May 13 00:41:26.934429 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025 May 13 00:41:26.934450 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:41:26.934460 kernel: BIOS-provided physical RAM map: May 13 00:41:26.934466 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 00:41:26.934471 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 13 00:41:26.934477 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 13 00:41:26.934484 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 13 00:41:26.934490 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 13 00:41:26.934496 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 13 00:41:26.934503 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 13 00:41:26.934508 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 13 00:41:26.934514 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 13 00:41:26.934520 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 13 00:41:26.934525 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 13 00:41:26.934533 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 13 00:41:26.934540 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 13 00:41:26.934547 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 13 
00:41:26.934553 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 00:41:26.934559 kernel: NX (Execute Disable) protection: active May 13 00:41:26.934565 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 13 00:41:26.934571 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 13 00:41:26.934588 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 13 00:41:26.934594 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 13 00:41:26.934599 kernel: extended physical RAM map: May 13 00:41:26.934606 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 00:41:26.934613 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 13 00:41:26.934619 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 13 00:41:26.934626 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 13 00:41:26.934632 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 13 00:41:26.934638 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable May 13 00:41:26.934644 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 13 00:41:26.934650 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable May 13 00:41:26.934656 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable May 13 00:41:26.934662 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable May 13 00:41:26.934668 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable May 13 00:41:26.934674 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable May 13 00:41:26.934682 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 13 00:41:26.934688 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data May 13 00:41:26.934694 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 13 00:41:26.934700 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 13 00:41:26.934709 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 13 00:41:26.934716 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 13 00:41:26.934722 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 00:41:26.934730 kernel: efi: EFI v2.70 by EDK II May 13 00:41:26.934737 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 May 13 00:41:26.934743 kernel: random: crng init done May 13 00:41:26.934750 kernel: SMBIOS 2.8 present. May 13 00:41:26.934756 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 13 00:41:26.934763 kernel: Hypervisor detected: KVM May 13 00:41:26.934769 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 00:41:26.934776 kernel: kvm-clock: cpu 0, msr 52196001, primary cpu clock May 13 00:41:26.934782 kernel: kvm-clock: using sched offset of 4539356254 cycles May 13 00:41:26.934793 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 00:41:26.934800 kernel: tsc: Detected 2794.748 MHz processor May 13 00:41:26.934807 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 00:41:26.934814 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 00:41:26.934821 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 13 00:41:26.934827 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 00:41:26.934834 kernel: Using GB pages for direct mapping May 13 00:41:26.934841 kernel: Secure boot disabled May 13 00:41:26.934847 kernel: ACPI: Early table checksum verification disabled May 13 00:41:26.934856 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 13 00:41:26.934862 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 13 00:41:26.934869 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:26.934876 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:26.934883 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 13 00:41:26.934889 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:26.934896 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:26.934903 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:26.934909 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:26.934918 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 13 00:41:26.934924 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 13 00:41:26.934931 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 13 00:41:26.934938 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 13 00:41:26.934945 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 13 00:41:26.934951 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 13 00:41:26.934958 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 13 00:41:26.934972 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 13 00:41:26.934978 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 13 00:41:26.934986 kernel: No NUMA configuration found May 13 00:41:26.934994 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 13 00:41:26.935001 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 13 
00:41:26.935007 kernel: Zone ranges: May 13 00:41:26.935014 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 00:41:26.935021 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 13 00:41:26.935027 kernel: Normal empty May 13 00:41:26.935034 kernel: Movable zone start for each node May 13 00:41:26.935040 kernel: Early memory node ranges May 13 00:41:26.935048 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 13 00:41:26.935055 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 13 00:41:26.935062 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 13 00:41:26.935069 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 13 00:41:26.935075 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 13 00:41:26.935082 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 13 00:41:26.935089 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 13 00:41:26.935095 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:41:26.935102 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 13 00:41:26.935108 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 13 00:41:26.935117 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:41:26.935123 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 13 00:41:26.935130 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 13 00:41:26.935137 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 13 00:41:26.935143 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 00:41:26.935150 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 00:41:26.935157 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 00:41:26.935163 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 00:41:26.935170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 00:41:26.935178 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 00:41:26.935185 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 00:41:26.935191 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 00:41:26.935201 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 00:41:26.935207 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 13 00:41:26.935214 kernel: TSC deadline timer available May 13 00:41:26.935221 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 13 00:41:26.935227 kernel: kvm-guest: KVM setup pv remote TLB flush May 13 00:41:26.935234 kernel: kvm-guest: setup PV sched yield May 13 00:41:26.935242 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 13 00:41:26.935250 kernel: Booting paravirtualized kernel on KVM May 13 00:41:26.935261 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 00:41:26.935269 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 13 00:41:26.935277 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 13 00:41:26.935283 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 13 00:41:26.935290 kernel: pcpu-alloc: [0] 0 1 2 3 May 13 00:41:26.935297 kernel: kvm-guest: setup async PF for cpu 0 May 13 00:41:26.935304 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 May 13 00:41:26.935311 kernel: kvm-guest: PV spinlocks enabled May 13 00:41:26.935318 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 13 00:41:26.935325 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 May 13 00:41:26.935333 kernel: Policy zone: DMA32 May 13 00:41:26.935341 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:41:26.935349 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:41:26.935356 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 00:41:26.935364 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:41:26.935371 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:41:26.935379 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 169308K reserved, 0K cma-reserved) May 13 00:41:26.935386 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 00:41:26.935393 kernel: ftrace: allocating 34584 entries in 136 pages May 13 00:41:26.935400 kernel: ftrace: allocated 136 pages with 2 groups May 13 00:41:26.935407 kernel: rcu: Hierarchical RCU implementation. May 13 00:41:26.935415 kernel: rcu: RCU event tracing is enabled. May 13 00:41:26.935422 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 00:41:26.935431 kernel: Rude variant of Tasks RCU enabled. May 13 00:41:26.935438 kernel: Tracing variant of Tasks RCU enabled. May 13 00:41:26.935445 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 00:41:26.935452 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 00:41:26.935459 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 13 00:41:26.935466 kernel: Console: colour dummy device 80x25 May 13 00:41:26.935473 kernel: printk: console [ttyS0] enabled May 13 00:41:26.935480 kernel: ACPI: Core revision 20210730 May 13 00:41:26.935488 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 13 00:41:26.935496 kernel: APIC: Switch to symmetric I/O mode setup May 13 00:41:26.935503 kernel: x2apic enabled May 13 00:41:26.935510 kernel: Switched APIC routing to physical x2apic. May 13 00:41:26.935517 kernel: kvm-guest: setup PV IPIs May 13 00:41:26.935524 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 00:41:26.935531 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 00:41:26.935538 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 13 00:41:26.935545 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 13 00:41:26.935552 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 13 00:41:26.935561 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 13 00:41:26.935568 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 00:41:26.935585 kernel: Spectre V2 : Mitigation: Retpolines May 13 00:41:26.935592 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 00:41:26.935599 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 13 00:41:26.935606 kernel: RETBleed: Mitigation: untrained return thunk May 13 00:41:26.935613 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 00:41:26.935623 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 13 00:41:26.935630 kernel: 
x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 00:41:26.935639 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 00:41:26.935646 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 00:41:26.935653 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 00:41:26.935661 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 13 00:41:26.935668 kernel: Freeing SMP alternatives memory: 32K May 13 00:41:26.935675 kernel: pid_max: default: 32768 minimum: 301 May 13 00:41:26.935682 kernel: LSM: Security Framework initializing May 13 00:41:26.935689 kernel: SELinux: Initializing. May 13 00:41:26.935696 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:41:26.935704 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:41:26.935711 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 13 00:41:26.935719 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 13 00:41:26.935726 kernel: ... version: 0 May 13 00:41:26.935732 kernel: ... bit width: 48 May 13 00:41:26.935739 kernel: ... generic registers: 6 May 13 00:41:26.935746 kernel: ... value mask: 0000ffffffffffff May 13 00:41:26.935753 kernel: ... max period: 00007fffffffffff May 13 00:41:26.935760 kernel: ... fixed-purpose events: 0 May 13 00:41:26.935768 kernel: ... event mask: 000000000000003f May 13 00:41:26.935776 kernel: signal: max sigframe size: 1776 May 13 00:41:26.935783 kernel: rcu: Hierarchical SRCU implementation. May 13 00:41:26.935790 kernel: smp: Bringing up secondary CPUs ... May 13 00:41:26.935796 kernel: x86: Booting SMP configuration: May 13 00:41:26.935804 kernel: .... 
node #0, CPUs: #1 May 13 00:41:26.935811 kernel: kvm-clock: cpu 1, msr 52196041, secondary cpu clock May 13 00:41:26.935818 kernel: kvm-guest: setup async PF for cpu 1 May 13 00:41:26.935825 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 May 13 00:41:26.935833 kernel: #2 May 13 00:41:26.935840 kernel: kvm-clock: cpu 2, msr 52196081, secondary cpu clock May 13 00:41:26.935847 kernel: kvm-guest: setup async PF for cpu 2 May 13 00:41:26.935854 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 May 13 00:41:26.935861 kernel: #3 May 13 00:41:26.935868 kernel: kvm-clock: cpu 3, msr 521960c1, secondary cpu clock May 13 00:41:26.935875 kernel: kvm-guest: setup async PF for cpu 3 May 13 00:41:26.935882 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 May 13 00:41:26.935889 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:41:26.935896 kernel: smpboot: Max logical packages: 1 May 13 00:41:26.935905 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 13 00:41:26.935912 kernel: devtmpfs: initialized May 13 00:41:26.935919 kernel: x86/mm: Memory block size: 128MB May 13 00:41:26.935926 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 13 00:41:26.935933 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 13 00:41:26.935940 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 13 00:41:26.935947 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 13 00:41:26.935955 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 13 00:41:26.935962 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:41:26.935977 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:41:26.935984 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:41:26.935991 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family May 13 00:41:26.935998 kernel: audit: initializing netlink subsys (disabled) May 13 00:41:26.936006 kernel: audit: type=2000 audit(1747096885.692:1): state=initialized audit_enabled=0 res=1 May 13 00:41:26.936013 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:41:26.936020 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:41:26.936027 kernel: cpuidle: using governor menu May 13 00:41:26.936035 kernel: ACPI: bus type PCI registered May 13 00:41:26.936042 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:41:26.936049 kernel: dca service started, version 1.12.1 May 13 00:41:26.936056 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 13 00:41:26.936063 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 13 00:41:26.936070 kernel: PCI: Using configuration type 1 for base access May 13 00:41:26.936078 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 00:41:26.936085 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:41:26.936092 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:41:26.936101 kernel: ACPI: Added _OSI(Module Device) May 13 00:41:26.936108 kernel: ACPI: Added _OSI(Processor Device) May 13 00:41:26.936115 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:41:26.936122 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:41:26.936129 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 00:41:26.936136 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 00:41:26.936143 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 00:41:26.936150 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:41:26.936157 kernel: ACPI: Interpreter enabled May 13 00:41:26.936164 kernel: ACPI: PM: (supports S0 S3 S5) May 13 00:41:26.936172 kernel: ACPI: Using IOAPIC for interrupt routing May 13 00:41:26.936179 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 00:41:26.936187 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 13 00:41:26.936194 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 00:41:26.936333 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:41:26.936411 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 13 00:41:26.936485 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 13 00:41:26.936496 kernel: PCI host bridge to bus 0000:00 May 13 00:41:26.936628 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 00:41:26.936704 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 00:41:26.936770 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 00:41:26.936835 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 13 
00:41:26.936900 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 00:41:26.936975 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 13 00:41:26.937045 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 00:41:26.937147 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 13 00:41:26.937238 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 13 00:41:26.937314 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 13 00:41:26.937387 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 13 00:41:26.937459 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 13 00:41:26.937531 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 13 00:41:26.937628 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 00:41:26.937713 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 13 00:41:26.937792 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 13 00:41:26.937868 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 13 00:41:26.937944 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 13 00:41:26.938044 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 13 00:41:26.938199 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 13 00:41:26.938278 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 13 00:41:26.938353 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 13 00:41:26.938443 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 13 00:41:26.938518 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 13 00:41:26.938607 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 13 00:41:26.938687 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 13 00:41:26.938766 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfffc0000-0xffffffff pref] May 13 00:41:26.938853 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 13 00:41:26.938926 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 13 00:41:26.939023 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 13 00:41:26.939097 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 13 00:41:26.939169 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 13 00:41:26.939265 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 13 00:41:26.939361 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 13 00:41:26.939371 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 00:41:26.939379 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 00:41:26.939386 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 00:41:26.939393 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 00:41:26.939400 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 13 00:41:26.939407 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 13 00:41:26.939414 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 13 00:41:26.939424 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 13 00:41:26.939431 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 13 00:41:26.939438 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 13 00:41:26.939445 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 13 00:41:26.939452 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 13 00:41:26.939459 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 13 00:41:26.939466 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 13 00:41:26.939473 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 13 00:41:26.939480 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 13 
00:41:26.939488 kernel: iommu: Default domain type: Translated May 13 00:41:26.939495 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:41:26.939568 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 13 00:41:26.939656 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:41:26.939728 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 13 00:41:26.939737 kernel: vgaarb: loaded May 13 00:41:26.939744 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 00:41:26.939752 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 00:41:26.939759 kernel: PTP clock support registered May 13 00:41:26.939768 kernel: Registered efivars operations May 13 00:41:26.939775 kernel: PCI: Using ACPI for IRQ routing May 13 00:41:26.939782 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:41:26.939789 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 13 00:41:26.939796 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 13 00:41:26.939803 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] May 13 00:41:26.939810 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] May 13 00:41:26.939817 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 13 00:41:26.939824 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 13 00:41:26.939833 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 13 00:41:26.939840 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 13 00:41:26.939847 kernel: clocksource: Switched to clocksource kvm-clock May 13 00:41:26.939854 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:41:26.939861 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:41:26.939868 kernel: pnp: PnP ACPI init May 13 00:41:26.939976 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 13 00:41:26.939993 kernel: pnp: PnP ACPI: found 6 devices 
May 13 00:41:26.940001 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 00:41:26.940010 kernel: NET: Registered PF_INET protocol family
May 13 00:41:26.940018 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:41:26.940027 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:41:26.940036 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:41:26.940044 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:41:26.940053 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 00:41:26.940062 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:41:26.940073 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:41:26.940082 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:41:26.940090 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:41:26.940099 kernel: NET: Registered PF_XDP protocol family
May 13 00:41:26.940188 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 13 00:41:26.940266 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 13 00:41:26.940334 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 00:41:26.940400 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 00:41:26.940470 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 00:41:26.940535 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 00:41:26.940614 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 00:41:26.940681 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 13 00:41:26.940701 kernel: PCI: CLS 0 bytes, default 64
May 13 00:41:26.940709 kernel: Initialise system trusted keyrings
May 13 00:41:26.940716 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:41:26.940723 kernel: Key type asymmetric registered
May 13 00:41:26.940730 kernel: Asymmetric key parser 'x509' registered
May 13 00:41:26.940740 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 00:41:26.940748 kernel: io scheduler mq-deadline registered
May 13 00:41:26.940764 kernel: io scheduler kyber registered
May 13 00:41:26.940773 kernel: io scheduler bfq registered
May 13 00:41:26.940781 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 00:41:26.940788 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 00:41:26.940796 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 00:41:26.940804 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 00:41:26.940811 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:41:26.940821 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 00:41:26.940828 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 00:41:26.940836 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 00:41:26.940843 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 00:41:26.940851 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 00:41:26.940941 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 00:41:26.941026 kernel: rtc_cmos 00:04: registered as rtc0
May 13 00:41:26.941095 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:41:26 UTC (1747096886)
May 13 00:41:26.941167 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 00:41:26.941177 kernel: efifb: probing for efifb
May 13 00:41:26.941184 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 13 00:41:26.941192 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 13 00:41:26.941199 kernel: efifb: scrolling: redraw
May 13 00:41:26.941207 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 13 00:41:26.941214 kernel: Console: switching to colour frame buffer device 160x50
May 13 00:41:26.941221 kernel: fb0: EFI VGA frame buffer device
May 13 00:41:26.941229 kernel: pstore: Registered efi as persistent store backend
May 13 00:41:26.941238 kernel: NET: Registered PF_INET6 protocol family
May 13 00:41:26.941246 kernel: Segment Routing with IPv6
May 13 00:41:26.941254 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:41:26.941262 kernel: NET: Registered PF_PACKET protocol family
May 13 00:41:26.941270 kernel: Key type dns_resolver registered
May 13 00:41:26.941277 kernel: IPI shorthand broadcast: enabled
May 13 00:41:26.941286 kernel: sched_clock: Marking stable (497350314, 128563070)->(647350074, -21436690)
May 13 00:41:26.941293 kernel: registered taskstats version 1
May 13 00:41:26.941301 kernel: Loading compiled-in X.509 certificates
May 13 00:41:26.941310 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095'
May 13 00:41:26.941317 kernel: Key type .fscrypt registered
May 13 00:41:26.941324 kernel: Key type fscrypt-provisioning registered
May 13 00:41:26.941332 kernel: pstore: Using crash dump compression: deflate
May 13 00:41:26.941339 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:41:26.941348 kernel: ima: Allocated hash algorithm: sha1
May 13 00:41:26.941356 kernel: ima: No architecture policies found
May 13 00:41:26.941363 kernel: clk: Disabling unused clocks
May 13 00:41:26.941371 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 13 00:41:26.941378 kernel: Write protecting the kernel read-only data: 28672k
May 13 00:41:26.941386 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 13 00:41:26.941393 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 13 00:41:26.941400 kernel: Run /init as init process
May 13 00:41:26.941408 kernel: with arguments:
May 13 00:41:26.941417 kernel: /init
May 13 00:41:26.941424 kernel: with environment:
May 13 00:41:26.941431 kernel: HOME=/
May 13 00:41:26.941438 kernel: TERM=linux
May 13 00:41:26.941445 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:41:26.941455 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:41:26.941465 systemd[1]: Detected virtualization kvm.
May 13 00:41:26.941473 systemd[1]: Detected architecture x86-64.
May 13 00:41:26.941482 systemd[1]: Running in initrd.
May 13 00:41:26.941489 systemd[1]: No hostname configured, using default hostname.
May 13 00:41:26.941497 systemd[1]: Hostname set to .
May 13 00:41:26.941505 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:41:26.941513 systemd[1]: Queued start job for default target initrd.target.
May 13 00:41:26.941520 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:41:26.941554 systemd[1]: Reached target cryptsetup.target.
May 13 00:41:26.941599 systemd[1]: Reached target paths.target.
May 13 00:41:26.941608 systemd[1]: Reached target slices.target.
May 13 00:41:26.941618 systemd[1]: Reached target swap.target.
May 13 00:41:26.941626 systemd[1]: Reached target timers.target.
May 13 00:41:26.941634 systemd[1]: Listening on iscsid.socket.
May 13 00:41:26.941642 systemd[1]: Listening on iscsiuio.socket.
May 13 00:41:26.941650 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:41:26.941658 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:41:26.941666 systemd[1]: Listening on systemd-journald.socket.
May 13 00:41:26.941675 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:41:26.941682 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:41:26.941691 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:41:26.941698 systemd[1]: Reached target sockets.target.
May 13 00:41:26.941706 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:41:26.941714 systemd[1]: Finished network-cleanup.service.
May 13 00:41:26.941722 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:41:26.941730 systemd[1]: Starting systemd-journald.service...
May 13 00:41:26.941737 systemd[1]: Starting systemd-modules-load.service...
May 13 00:41:26.941747 systemd[1]: Starting systemd-resolved.service...
May 13 00:41:26.941755 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 00:41:26.941762 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:41:26.941770 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:41:26.941778 kernel: audit: type=1130 audit(1747096886.932:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.941786 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 00:41:26.941794 kernel: audit: type=1130 audit(1747096886.937:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.941802 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 00:41:26.941814 systemd-journald[198]: Journal started
May 13 00:41:26.941855 systemd-journald[198]: Runtime Journal (/run/log/journal/e152a106c2674e849075e524db863b43) is 6.0M, max 48.4M, 42.4M free.
May 13 00:41:26.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.942488 systemd-modules-load[199]: Inserted module 'overlay'
May 13 00:41:26.944601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:41:26.946607 systemd[1]: Started systemd-journald.service.
May 13 00:41:26.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.951842 kernel: audit: type=1130 audit(1747096886.945:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.951425 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:41:26.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.960295 kernel: audit: type=1130 audit(1747096886.950:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.962294 systemd[1]: Finished dracut-cmdline-ask.service.
May 13 00:41:26.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.963857 systemd[1]: Starting dracut-cmdline.service...
May 13 00:41:26.968649 kernel: audit: type=1130 audit(1747096886.962:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.974933 dracut-cmdline[216]: dracut-dracut-053
May 13 00:41:26.977979 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:41:26.985927 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:41:26.988845 kernel: audit: type=1130 audit(1747096886.985:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:26.980225 systemd-resolved[200]: Positive Trust Anchors:
May 13 00:41:26.980233 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:41:26.993034 kernel: Bridge firewalling registered
May 13 00:41:26.980266 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:41:26.982986 systemd-resolved[200]: Defaulting to hostname 'linux'.
May 13 00:41:26.984523 systemd[1]: Started systemd-resolved.service.
May 13 00:41:26.986014 systemd[1]: Reached target nss-lookup.target.
May 13 00:41:26.991380 systemd-modules-load[199]: Inserted module 'br_netfilter'
May 13 00:41:27.009598 kernel: SCSI subsystem initialized
May 13 00:41:27.021621 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:41:27.021688 kernel: device-mapper: uevent: version 1.0.3
May 13 00:41:27.021700 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 13 00:41:27.024415 systemd-modules-load[199]: Inserted module 'dm_multipath'
May 13 00:41:27.025171 systemd[1]: Finished systemd-modules-load.service.
May 13 00:41:27.030868 kernel: audit: type=1130 audit(1747096887.025:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.027091 systemd[1]: Starting systemd-sysctl.service...
May 13 00:41:27.035970 systemd[1]: Finished systemd-sysctl.service.
May 13 00:41:27.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.040601 kernel: audit: type=1130 audit(1747096887.036:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.042596 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:41:27.057614 kernel: iscsi: registered transport (tcp)
May 13 00:41:27.079609 kernel: iscsi: registered transport (qla4xxx)
May 13 00:41:27.079640 kernel: QLogic iSCSI HBA Driver
May 13 00:41:27.110687 systemd[1]: Finished dracut-cmdline.service.
May 13 00:41:27.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.111764 systemd[1]: Starting dracut-pre-udev.service...
May 13 00:41:27.116623 kernel: audit: type=1130 audit(1747096887.110:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.157622 kernel: raid6: avx2x4 gen() 30298 MB/s
May 13 00:41:27.174605 kernel: raid6: avx2x4 xor() 7831 MB/s
May 13 00:41:27.191600 kernel: raid6: avx2x2 gen() 32058 MB/s
May 13 00:41:27.208597 kernel: raid6: avx2x2 xor() 18919 MB/s
May 13 00:41:27.225599 kernel: raid6: avx2x1 gen() 26008 MB/s
May 13 00:41:27.242598 kernel: raid6: avx2x1 xor() 15270 MB/s
May 13 00:41:27.259606 kernel: raid6: sse2x4 gen() 14752 MB/s
May 13 00:41:27.276605 kernel: raid6: sse2x4 xor() 7336 MB/s
May 13 00:41:27.293602 kernel: raid6: sse2x2 gen() 16353 MB/s
May 13 00:41:27.310600 kernel: raid6: sse2x2 xor() 9823 MB/s
May 13 00:41:27.327598 kernel: raid6: sse2x1 gen() 12276 MB/s
May 13 00:41:27.344988 kernel: raid6: sse2x1 xor() 7798 MB/s
May 13 00:41:27.345010 kernel: raid6: using algorithm avx2x2 gen() 32058 MB/s
May 13 00:41:27.345037 kernel: raid6: .... xor() 18919 MB/s, rmw enabled
May 13 00:41:27.345698 kernel: raid6: using avx2x2 recovery algorithm
May 13 00:41:27.357600 kernel: xor: automatically using best checksumming function avx
May 13 00:41:27.446609 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 13 00:41:27.455626 systemd[1]: Finished dracut-pre-udev.service.
May 13 00:41:27.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.457000 audit: BPF prog-id=7 op=LOAD
May 13 00:41:27.457000 audit: BPF prog-id=8 op=LOAD
May 13 00:41:27.457700 systemd[1]: Starting systemd-udevd.service...
May 13 00:41:27.470447 systemd-udevd[401]: Using default interface naming scheme 'v252'.
May 13 00:41:27.475308 systemd[1]: Started systemd-udevd.service.
May 13 00:41:27.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.476142 systemd[1]: Starting dracut-pre-trigger.service...
May 13 00:41:27.486855 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
May 13 00:41:27.512735 systemd[1]: Finished dracut-pre-trigger.service.
May 13 00:41:27.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.514608 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:41:27.549798 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:41:27.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:27.581596 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:41:27.587033 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:41:27.587046 kernel: GPT:9289727 != 19775487
May 13 00:41:27.587055 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:41:27.587064 kernel: GPT:9289727 != 19775487
May 13 00:41:27.587073 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:41:27.587086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:27.590593 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:41:27.598638 kernel: libata version 3.00 loaded.
May 13 00:41:27.600818 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 00:41:27.600842 kernel: AES CTR mode by8 optimization enabled
May 13 00:41:27.604600 kernel: ahci 0000:00:1f.2: version 3.0
May 13 00:41:27.619747 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 00:41:27.619763 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 00:41:27.619857 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 00:41:27.619941 kernel: scsi host0: ahci
May 13 00:41:27.620084 kernel: scsi host1: ahci
May 13 00:41:27.620177 kernel: scsi host2: ahci
May 13 00:41:27.620263 kernel: scsi host3: ahci
May 13 00:41:27.620355 kernel: scsi host4: ahci
May 13 00:41:27.620447 kernel: scsi host5: ahci
May 13 00:41:27.620550 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 13 00:41:27.620561 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 13 00:41:27.620572 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 13 00:41:27.620616 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 13 00:41:27.620626 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 13 00:41:27.620634 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 13 00:41:27.629600 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449)
May 13 00:41:27.631939 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 13 00:41:27.634866 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 13 00:41:27.638860 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 13 00:41:27.645153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:41:27.649038 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 13 00:41:27.651714 systemd[1]: Starting disk-uuid.service...
May 13 00:41:27.658865 disk-uuid[523]: Primary Header is updated.
May 13 00:41:27.658865 disk-uuid[523]: Secondary Entries is updated.
May 13 00:41:27.658865 disk-uuid[523]: Secondary Header is updated.
May 13 00:41:27.662654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:27.664609 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:27.933002 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 00:41:27.933080 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 00:41:27.933090 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 00:41:27.933111 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 00:41:27.934626 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 00:41:27.935606 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 00:41:27.936615 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 00:41:27.937637 kernel: ata3.00: applying bridge limits
May 13 00:41:27.938610 kernel: ata3.00: configured for UDMA/100
May 13 00:41:27.938629 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 00:41:27.971613 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 00:41:27.988308 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 00:41:27.988339 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 00:41:28.692179 disk-uuid[524]: The operation has completed successfully.
May 13 00:41:28.693845 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:28.716323 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:41:28.716458 systemd[1]: Finished disk-uuid.service.
May 13 00:41:28.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:28.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:28.728687 systemd[1]: Starting verity-setup.service...
May 13 00:41:28.740599 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 00:41:28.760836 systemd[1]: Found device dev-mapper-usr.device.
May 13 00:41:28.782705 systemd[1]: Mounting sysusr-usr.mount...
May 13 00:41:28.784678 systemd[1]: Finished verity-setup.service.
May 13 00:41:28.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:28.842604 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 13 00:41:28.842657 systemd[1]: Mounted sysusr-usr.mount.
May 13 00:41:28.844266 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 13 00:41:28.846305 systemd[1]: Starting ignition-setup.service...
May 13 00:41:28.848354 systemd[1]: Starting parse-ip-for-networkd.service...
May 13 00:41:28.856267 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:41:28.856321 kernel: BTRFS info (device vda6): using free space tree
May 13 00:41:28.856352 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:41:28.863564 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:41:28.878350 systemd[1]: Finished ignition-setup.service.
May 13 00:41:28.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:28.879309 systemd[1]: Starting ignition-fetch-offline.service...
May 13 00:41:28.915935 ignition[652]: Ignition 2.14.0
May 13 00:41:28.915945 ignition[652]: Stage: fetch-offline
May 13 00:41:28.915993 ignition[652]: no configs at "/usr/lib/ignition/base.d"
May 13 00:41:28.916001 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:28.918636 systemd[1]: Finished parse-ip-for-networkd.service.
May 13 00:41:28.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:28.921000 audit: BPF prog-id=9 op=LOAD
May 13 00:41:28.916094 ignition[652]: parsed url from cmdline: ""
May 13 00:41:28.916098 ignition[652]: no config URL provided
May 13 00:41:28.916102 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:41:28.922368 systemd[1]: Starting systemd-networkd.service...
May 13 00:41:28.916110 ignition[652]: no config at "/usr/lib/ignition/user.ign"
May 13 00:41:28.916126 ignition[652]: op(1): [started] loading QEMU firmware config module
May 13 00:41:28.916132 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:41:28.923745 ignition[652]: op(1): [finished] loading QEMU firmware config module
May 13 00:41:28.958268 systemd-networkd[721]: lo: Link UP
May 13 00:41:28.958279 systemd-networkd[721]: lo: Gained carrier
May 13 00:41:28.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:28.958736 systemd-networkd[721]: Enumeration completed
May 13 00:41:28.958976 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:41:28.959079 systemd[1]: Started systemd-networkd.service.
May 13 00:41:28.960348 systemd-networkd[721]: eth0: Link UP
May 13 00:41:28.960351 systemd-networkd[721]: eth0: Gained carrier
May 13 00:41:28.960730 systemd[1]: Reached target network.target.
May 13 00:41:28.968486 systemd[1]: Starting iscsiuio.service...
May 13 00:41:28.972425 systemd[1]: Started iscsiuio.service.
May 13 00:41:28.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:28.974687 systemd[1]: Starting iscsid.service...
May 13 00:41:28.977306 ignition[652]: parsing config with SHA512: 4129d3df0d5f4b222ca0192f4b7c641395132b317058cb8bece98c25729821a08eaf8f26c0d2bbd4726998aa053585453f6e0193d6e502a4d41a076635479b0d
May 13 00:41:28.977480 iscsid[726]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:41:28.977480 iscsid[726]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 13 00:41:28.977480 iscsid[726]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 13 00:41:28.977480 iscsid[726]: If using hardware iscsi like qla4xxx this message can be ignored.
May 13 00:41:28.977480 iscsid[726]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:41:28.977480 iscsid[726]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 13 00:41:28.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:28.985231 ignition[652]: fetch-offline: fetch-offline passed
May 13 00:41:28.979058 systemd[1]: Started iscsid.service.
May 13 00:41:28.985284 ignition[652]: Ignition finished successfully
May 13 00:41:28.984686 unknown[652]: fetched base config from "system"
May 13 00:41:28.984692 unknown[652]: fetched user config from "qemu"
May 13 00:41:28.993803 systemd[1]: Starting dracut-initqueue.service...
May 13 00:41:28.995541 systemd[1]: Finished ignition-fetch-offline.service.
May 13 00:41:28.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:28.996657 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:41:28.997368 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:41:29.000831 systemd[1]: Starting ignition-kargs.service...
May 13 00:41:29.003108 systemd[1]: Finished dracut-initqueue.service.
May 13 00:41:29.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:29.004802 systemd[1]: Reached target remote-fs-pre.target.
May 13 00:41:29.006452 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:41:29.008206 systemd[1]: Reached target remote-fs.target.
May 13 00:41:29.009818 ignition[733]: Ignition 2.14.0
May 13 00:41:29.009828 ignition[733]: Stage: kargs
May 13 00:41:29.009922 ignition[733]: no configs at "/usr/lib/ignition/base.d"
May 13 00:41:29.011433 systemd[1]: Starting dracut-pre-mount.service...
May 13 00:41:29.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:29.009932 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:29.012656 systemd[1]: Finished ignition-kargs.service.
May 13 00:41:29.010882 ignition[733]: kargs: kargs passed
May 13 00:41:29.014914 systemd[1]: Starting ignition-disks.service...
May 13 00:41:29.010930 ignition[733]: Ignition finished successfully
May 13 00:41:29.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:29.019795 systemd[1]: Finished dracut-pre-mount.service.
May 13 00:41:29.022106 ignition[742]: Ignition 2.14.0
May 13 00:41:29.022117 ignition[742]: Stage: disks
May 13 00:41:29.022209 ignition[742]: no configs at "/usr/lib/ignition/base.d"
May 13 00:41:29.022219 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:29.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:29.024032 systemd[1]: Finished ignition-disks.service.
May 13 00:41:29.023209 ignition[742]: disks: disks passed
May 13 00:41:29.025306 systemd[1]: Reached target initrd-root-device.target.
May 13 00:41:29.023248 ignition[742]: Ignition finished successfully
May 13 00:41:29.027172 systemd[1]: Reached target local-fs-pre.target.
May 13 00:41:29.028007 systemd[1]: Reached target local-fs.target.
May 13 00:41:29.028781 systemd[1]: Reached target sysinit.target.
May 13 00:41:29.030305 systemd[1]: Reached target basic.target.
May 13 00:41:29.031701 systemd[1]: Starting systemd-fsck-root.service...
May 13 00:41:29.041843 systemd-fsck[754]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 13 00:41:29.047705 systemd[1]: Finished systemd-fsck-root.service.
May 13 00:41:29.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:29.050752 systemd[1]: Mounting sysroot.mount...
May 13 00:41:29.057596 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 13 00:41:29.057877 systemd[1]: Mounted sysroot.mount.
May 13 00:41:29.058016 systemd[1]: Reached target initrd-root-fs.target.
May 13 00:41:29.060924 systemd[1]: Mounting sysroot-usr.mount...
May 13 00:41:29.062500 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 13 00:41:29.062547 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:41:29.062572 systemd[1]: Reached target ignition-diskful.target.
May 13 00:41:29.069702 systemd[1]: Mounted sysroot-usr.mount.
May 13 00:41:29.071141 systemd[1]: Starting initrd-setup-root.service...
May 13 00:41:29.076876 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:41:29.081315 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory
May 13 00:41:29.084406 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:41:29.088169 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:41:29.115835 systemd[1]: Finished initrd-setup-root.service.
May 13 00:41:29.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:29.117542 systemd[1]: Starting ignition-mount.service...
May 13 00:41:29.118358 systemd[1]: Starting sysroot-boot.service...
May 13 00:41:29.125724 bash[806]: umount: /sysroot/usr/share/oem: not mounted.
May 13 00:41:29.133204 systemd[1]: Finished sysroot-boot.service.
May 13 00:41:29.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:29.135013 ignition[807]: INFO : Ignition 2.14.0
May 13 00:41:29.135013 ignition[807]: INFO : Stage: mount
May 13 00:41:29.135013 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:41:29.135013 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:29.138902 ignition[807]: INFO : mount: mount passed
May 13 00:41:29.138902 ignition[807]: INFO : Ignition finished successfully
May 13 00:41:29.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:29.139198 systemd[1]: Finished ignition-mount.service.
May 13 00:41:29.792060 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 00:41:29.799605 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
May 13 00:41:29.801634 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:41:29.801649 kernel: BTRFS info (device vda6): using free space tree
May 13 00:41:29.801663 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:41:29.805200 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 00:41:29.806661 systemd[1]: Starting ignition-files.service...
May 13 00:41:29.820976 ignition[835]: INFO : Ignition 2.14.0
May 13 00:41:29.820976 ignition[835]: INFO : Stage: files
May 13 00:41:29.822732 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:41:29.822732 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:29.822732 ignition[835]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:41:29.826478 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:41:29.826478 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:41:29.826478 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:41:29.826478 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:41:29.826478 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:41:29.826478 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:41:29.826478 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 00:41:29.825419 unknown[835]: wrote ssh authorized keys file for user: core
May 13 00:41:30.196813 systemd-networkd[721]: eth0: Gained IPv6LL
May 13 00:41:30.978748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 00:41:31.269632 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:41:31.271744 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:41:31.271744 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 00:41:31.767913 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 00:41:31.876200 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:41:31.876200 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:41:31.879752 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:41:31.879752 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:41:31.879752 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:41:31.879752 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:41:31.886627 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:41:31.886627 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:41:31.886627 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:41:31.886627 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:41:31.886627 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:41:31.886627 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:41:31.886627 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:41:31.886627 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:41:31.886627 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 00:41:32.336841 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 00:41:32.679762 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:41:32.679762 ignition[835]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 00:41:32.683757 ignition[835]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:41:32.683757 ignition[835]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:41:32.683757 ignition[835]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 00:41:32.683757 ignition[835]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 00:41:32.683757 ignition[835]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:41:32.683757 ignition[835]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:41:32.683757 ignition[835]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 00:41:32.683757 ignition[835]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:41:32.683757 ignition[835]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:41:32.712231 ignition[835]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:41:32.713898 ignition[835]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:41:32.713898 ignition[835]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:41:32.713898 ignition[835]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:41:32.713898 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:41:32.713898 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:41:32.713898 ignition[835]: INFO : files: files passed
May 13 00:41:32.713898 ignition[835]: INFO : Ignition finished successfully
May 13 00:41:32.723690 systemd[1]: Finished ignition-files.service.
May 13 00:41:32.729152 kernel: kauditd_printk_skb: 24 callbacks suppressed
May 13 00:41:32.729173 kernel: audit: type=1130 audit(1747096892.724:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.729189 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 13 00:41:32.729280 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 13 00:41:32.729880 systemd[1]: Starting ignition-quench.service...
May 13 00:41:32.742332 kernel: audit: type=1130 audit(1747096892.735:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.742368 kernel: audit: type=1131 audit(1747096892.735:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.733132 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:41:32.733221 systemd[1]: Finished ignition-quench.service.
May 13 00:41:32.745977 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 13 00:41:32.748748 initrd-setup-root-after-ignition[863]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:41:32.750899 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 13 00:41:32.756306 kernel: audit: type=1130 audit(1747096892.751:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.752061 systemd[1]: Reached target ignition-complete.target.
May 13 00:41:32.757905 systemd[1]: Starting initrd-parse-etc.service...
May 13 00:41:32.770362 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:41:32.770452 systemd[1]: Finished initrd-parse-etc.service.
May 13 00:41:32.779468 kernel: audit: type=1130 audit(1747096892.770:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.780227 kernel: audit: type=1131 audit(1747096892.770:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.771543 systemd[1]: Reached target initrd-fs.target.
May 13 00:41:32.779450 systemd[1]: Reached target initrd.target.
May 13 00:41:32.780251 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 13 00:41:32.780979 systemd[1]: Starting dracut-pre-pivot.service...
May 13 00:41:32.791407 systemd[1]: Finished dracut-pre-pivot.service.
May 13 00:41:32.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.793869 systemd[1]: Starting initrd-cleanup.service...
May 13 00:41:32.797793 kernel: audit: type=1130 audit(1747096892.792:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.802134 systemd[1]: Stopped target nss-lookup.target.
May 13 00:41:32.803181 systemd[1]: Stopped target remote-cryptsetup.target.
May 13 00:41:32.805262 systemd[1]: Stopped target timers.target.
May 13 00:41:32.807185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:41:32.813567 kernel: audit: type=1131 audit(1747096892.808:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.807283 systemd[1]: Stopped dracut-pre-pivot.service.
May 13 00:41:32.809090 systemd[1]: Stopped target initrd.target.
May 13 00:41:32.813759 systemd[1]: Stopped target basic.target.
May 13 00:41:32.815257 systemd[1]: Stopped target ignition-complete.target.
May 13 00:41:32.816964 systemd[1]: Stopped target ignition-diskful.target.
May 13 00:41:32.818633 systemd[1]: Stopped target initrd-root-device.target.
May 13 00:41:32.820428 systemd[1]: Stopped target remote-fs.target.
May 13 00:41:32.822235 systemd[1]: Stopped target remote-fs-pre.target.
May 13 00:41:32.824106 systemd[1]: Stopped target sysinit.target.
May 13 00:41:32.825533 systemd[1]: Stopped target local-fs.target.
May 13 00:41:32.827135 systemd[1]: Stopped target local-fs-pre.target.
May 13 00:41:32.828712 systemd[1]: Stopped target swap.target.
May 13 00:41:32.836125 kernel: audit: type=1131 audit(1747096892.831:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.830193 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:41:32.830307 systemd[1]: Stopped dracut-pre-mount.service.
May 13 00:41:32.842390 kernel: audit: type=1131 audit(1747096892.837:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.831903 systemd[1]: Stopped target cryptsetup.target.
May 13 00:41:32.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.836164 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:41:32.836253 systemd[1]: Stopped dracut-initqueue.service.
May 13 00:41:32.838057 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:41:32.838146 systemd[1]: Stopped ignition-fetch-offline.service.
May 13 00:41:32.842512 systemd[1]: Stopped target paths.target.
May 13 00:41:32.844019 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:41:32.847613 systemd[1]: Stopped systemd-ask-password-console.path.
May 13 00:41:32.848753 systemd[1]: Stopped target slices.target.
May 13 00:41:32.850429 systemd[1]: Stopped target sockets.target.
May 13 00:41:32.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.852462 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:41:32.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.852530 systemd[1]: Closed iscsid.socket.
May 13 00:41:32.854210 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:41:32.854298 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 13 00:41:32.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.866751 ignition[876]: INFO : Ignition 2.14.0
May 13 00:41:32.866751 ignition[876]: INFO : Stage: umount
May 13 00:41:32.866751 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:41:32.866751 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:32.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.856156 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:41:32.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.873522 ignition[876]: INFO : umount: umount passed
May 13 00:41:32.873522 ignition[876]: INFO : Ignition finished successfully
May 13 00:41:32.856237 systemd[1]: Stopped ignition-files.service.
May 13 00:41:32.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.858684 systemd[1]: Stopping ignition-mount.service...
May 13 00:41:32.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.859733 systemd[1]: Stopping iscsiuio.service...
May 13 00:41:32.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.861035 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:41:32.861188 systemd[1]: Stopped kmod-static-nodes.service.
May 13 00:41:32.864094 systemd[1]: Stopping sysroot-boot.service...
May 13 00:41:32.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.865136 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:41:32.865293 systemd[1]: Stopped systemd-udev-trigger.service.
May 13 00:41:32.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.866310 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:41:32.866422 systemd[1]: Stopped dracut-pre-trigger.service.
May 13 00:41:32.868968 systemd[1]: iscsiuio.service: Deactivated successfully.
May 13 00:41:32.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.869044 systemd[1]: Stopped iscsiuio.service.
May 13 00:41:32.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.870285 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:41:32.870347 systemd[1]: Stopped ignition-mount.service.
May 13 00:41:32.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.872935 systemd[1]: Stopped target network.target.
May 13 00:41:32.874301 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:41:32.874330 systemd[1]: Closed iscsiuio.socket.
May 13 00:41:32.875028 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:41:32.875060 systemd[1]: Stopped ignition-disks.service.
May 13 00:41:32.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.877242 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:41:32.877273 systemd[1]: Stopped ignition-kargs.service.
May 13 00:41:32.878718 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:41:32.878749 systemd[1]: Stopped ignition-setup.service.
May 13 00:41:32.880285 systemd[1]: Stopping systemd-networkd.service...
May 13 00:41:32.910000 audit: BPF prog-id=6 op=UNLOAD
May 13 00:41:32.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.881727 systemd[1]: Stopping systemd-resolved.service...
May 13 00:41:32.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.882903 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:41:32.882972 systemd[1]: Finished initrd-cleanup.service.
May 13 00:41:32.884611 systemd-networkd[721]: eth0: DHCPv6 lease lost
May 13 00:41:32.917000 audit: BPF prog-id=9 op=UNLOAD
May 13 00:41:32.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.885945 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:41:32.886041 systemd[1]: Stopped systemd-networkd.service.
May 13 00:41:32.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.887673 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:41:32.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.887704 systemd[1]: Closed systemd-networkd.socket.
May 13 00:41:32.890193 systemd[1]: Stopping network-cleanup.service...
May 13 00:41:32.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.892826 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:41:32.892907 systemd[1]: Stopped parse-ip-for-networkd.service.
May 13 00:41:32.893010 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:41:32.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:32.893040 systemd[1]: Stopped systemd-sysctl.service.
May 13 00:41:32.896851 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:41:32.896887 systemd[1]: Stopped systemd-modules-load.service.
May 13 00:41:32.898820 systemd[1]: Stopping systemd-udevd.service...
May 13 00:41:32.903916 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 00:41:32.904382 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:41:32.904473 systemd[1]: Stopped systemd-resolved.service.
May 13 00:41:32.910715 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:41:32.910845 systemd[1]: Stopped systemd-udevd.service.
May 13 00:41:32.912566 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:41:32.912662 systemd[1]: Stopped network-cleanup.service.
May 13 00:41:32.914479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:41:32.914513 systemd[1]: Closed systemd-udevd-control.socket.
May 13 00:41:32.916268 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:41:32.916293 systemd[1]: Closed systemd-udevd-kernel.socket.
May 13 00:41:32.917922 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:41:32.917954 systemd[1]: Stopped dracut-pre-udev.service.
May 13 00:41:32.918829 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:41:32.918861 systemd[1]: Stopped dracut-cmdline.service.
May 13 00:41:32.920435 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:41:32.920471 systemd[1]: Stopped dracut-cmdline-ask.service.
May 13 00:41:32.923098 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 13 00:41:32.924033 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:41:32.924074 systemd[1]: Stopped systemd-vconsole-setup.service.
May 13 00:41:32.925938 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:41:32.926335 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:41:32.926410 systemd[1]: Stopped sysroot-boot.service.
May 13 00:41:32.927737 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:41:32.927813 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 13 00:41:32.929431 systemd[1]: Reached target initrd-switch-root.target.
May 13 00:41:32.931308 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:41:32.931344 systemd[1]: Stopped initrd-setup-root.service.
May 13 00:41:32.933564 systemd[1]: Starting initrd-switch-root.service...
May 13 00:41:32.949545 systemd[1]: Switching root.
May 13 00:41:32.968132 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
May 13 00:41:32.968170 iscsid[726]: iscsid shutting down.
May 13 00:41:32.968941 systemd-journald[198]: Journal stopped
May 13 00:41:35.669757 kernel: SELinux: Class mctp_socket not defined in policy.
May 13 00:41:35.669812 kernel: SELinux: Class anon_inode not defined in policy.
May 13 00:41:35.669823 kernel: SELinux: the above unknown classes and permissions will be allowed
May 13 00:41:35.669833 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:41:35.669848 kernel: SELinux: policy capability open_perms=1
May 13 00:41:35.669858 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:41:35.669870 kernel: SELinux: policy capability always_check_network=0
May 13 00:41:35.669879 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:41:35.669891 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:41:35.669904 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:41:35.669914 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:41:35.669925 systemd[1]: Successfully loaded SELinux policy in 37.768ms.
May 13 00:41:35.669948 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.497ms.
May 13 00:41:35.669960 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:41:35.669971 systemd[1]: Detected virtualization kvm.
May 13 00:41:35.669984 systemd[1]: Detected architecture x86-64.
May 13 00:41:35.669995 systemd[1]: Detected first boot.
May 13 00:41:35.670005 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:41:35.670016 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 13 00:41:35.670027 systemd[1]: Populated /etc with preset unit settings.
May 13 00:41:35.670037 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:41:35.670049 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:41:35.670062 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:41:35.670075 systemd[1]: iscsid.service: Deactivated successfully.
May 13 00:41:35.670085 systemd[1]: Stopped iscsid.service.
May 13 00:41:35.670101 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 00:41:35.670115 systemd[1]: Stopped initrd-switch-root.service.
May 13 00:41:35.670127 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 00:41:35.670138 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 13 00:41:35.670150 systemd[1]: Created slice system-addon\x2drun.slice.
May 13 00:41:35.670160 systemd[1]: Created slice system-getty.slice.
May 13 00:41:35.670170 systemd[1]: Created slice system-modprobe.slice.
May 13 00:41:35.670181 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 13 00:41:35.670191 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 13 00:41:35.670202 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 13 00:41:35.670213 systemd[1]: Created slice user.slice.
May 13 00:41:35.670225 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:41:35.670236 systemd[1]: Started systemd-ask-password-wall.path.
May 13 00:41:35.670246 systemd[1]: Set up automount boot.automount.
May 13 00:41:35.670256 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 13 00:41:35.670267 systemd[1]: Stopped target initrd-switch-root.target.
May 13 00:41:35.670277 systemd[1]: Stopped target initrd-fs.target.
May 13 00:41:35.670289 systemd[1]: Stopped target initrd-root-fs.target.
May 13 00:41:35.670299 systemd[1]: Reached target integritysetup.target.
May 13 00:41:35.670311 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:41:35.670322 systemd[1]: Reached target remote-fs.target.
May 13 00:41:35.670332 systemd[1]: Reached target slices.target.
May 13 00:41:35.670343 systemd[1]: Reached target swap.target.
May 13 00:41:35.670354 systemd[1]: Reached target torcx.target.
May 13 00:41:35.670365 systemd[1]: Reached target veritysetup.target.
May 13 00:41:35.670377 systemd[1]: Listening on systemd-coredump.socket.
May 13 00:41:35.670387 systemd[1]: Listening on systemd-initctl.socket.
May 13 00:41:35.670398 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:41:35.670410 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:41:35.670420 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:41:35.670431 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:41:35.670442 systemd[1]: Mounting dev-hugepages.mount... May 13 00:41:35.670453 systemd[1]: Mounting dev-mqueue.mount... May 13 00:41:35.670463 systemd[1]: Mounting media.mount... May 13 00:41:35.670474 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:35.670484 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:41:35.670496 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:41:35.670511 systemd[1]: Mounting tmp.mount... May 13 00:41:35.670522 systemd[1]: Starting flatcar-tmpfiles.service... May 13 00:41:35.670533 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:35.670544 systemd[1]: Starting kmod-static-nodes.service... May 13 00:41:35.670554 systemd[1]: Starting modprobe@configfs.service... May 13 00:41:35.670565 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:35.670606 systemd[1]: Starting modprobe@drm.service... May 13 00:41:35.670617 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:35.670628 systemd[1]: Starting modprobe@fuse.service... May 13 00:41:35.670650 systemd[1]: Starting modprobe@loop.service... May 13 00:41:35.670661 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:41:35.670672 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:41:35.670682 systemd[1]: Stopped systemd-fsck-root.service. May 13 00:41:35.670699 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:41:35.670710 kernel: loop: module loaded May 13 00:41:35.670721 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:41:35.670732 kernel: fuse: init (API version 7.34) May 13 00:41:35.670742 systemd[1]: Stopped systemd-journald.service. May 13 00:41:35.670754 systemd[1]: Starting systemd-journald.service... 
May 13 00:41:35.670765 systemd[1]: Starting systemd-modules-load.service... May 13 00:41:35.670776 systemd[1]: Starting systemd-network-generator.service... May 13 00:41:35.670786 systemd[1]: Starting systemd-remount-fs.service... May 13 00:41:35.670796 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:41:35.670808 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:41:35.670818 systemd[1]: Stopped verity-setup.service. May 13 00:41:35.670829 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:35.670842 systemd-journald[986]: Journal started May 13 00:41:35.670883 systemd-journald[986]: Runtime Journal (/run/log/journal/e152a106c2674e849075e524db863b43) is 6.0M, max 48.4M, 42.4M free. May 13 00:41:33.025000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:41:33.435000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:41:33.435000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:41:33.435000 audit: BPF prog-id=10 op=LOAD May 13 00:41:33.435000 audit: BPF prog-id=10 op=UNLOAD May 13 00:41:33.435000 audit: BPF prog-id=11 op=LOAD May 13 00:41:33.435000 audit: BPF prog-id=11 op=UNLOAD May 13 00:41:33.470000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 13 00:41:33.470000 audit[909]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558a2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=892 pid=909 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:33.470000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:41:33.471000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 13 00:41:33.471000 audit[909]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155979 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:33.471000 audit: CWD cwd="/" May 13 00:41:33.471000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:33.471000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:33.471000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:41:35.538000 audit: BPF prog-id=12 op=LOAD May 13 00:41:35.538000 audit: BPF 
prog-id=3 op=UNLOAD May 13 00:41:35.538000 audit: BPF prog-id=13 op=LOAD May 13 00:41:35.538000 audit: BPF prog-id=14 op=LOAD May 13 00:41:35.538000 audit: BPF prog-id=4 op=UNLOAD May 13 00:41:35.538000 audit: BPF prog-id=5 op=UNLOAD May 13 00:41:35.539000 audit: BPF prog-id=15 op=LOAD May 13 00:41:35.539000 audit: BPF prog-id=12 op=UNLOAD May 13 00:41:35.539000 audit: BPF prog-id=16 op=LOAD May 13 00:41:35.539000 audit: BPF prog-id=17 op=LOAD May 13 00:41:35.539000 audit: BPF prog-id=13 op=UNLOAD May 13 00:41:35.539000 audit: BPF prog-id=14 op=UNLOAD May 13 00:41:35.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.552000 audit: BPF prog-id=15 op=UNLOAD May 13 00:41:35.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:35.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.650000 audit: BPF prog-id=18 op=LOAD May 13 00:41:35.650000 audit: BPF prog-id=19 op=LOAD May 13 00:41:35.650000 audit: BPF prog-id=20 op=LOAD May 13 00:41:35.650000 audit: BPF prog-id=16 op=UNLOAD May 13 00:41:35.650000 audit: BPF prog-id=17 op=UNLOAD May 13 00:41:35.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:35.667000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:41:35.667000 audit[986]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe27c93100 a2=4000 a3=7ffe27c9319c items=0 ppid=1 pid=986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:35.667000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:41:33.469341 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:35.536553 systemd[1]: Queued start job for default target multi-user.target. May 13 00:41:33.469547 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:41:35.536564 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 00:41:33.469563 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:41:35.540367 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 13 00:41:33.469610 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 13 00:41:33.469619 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="skipped missing lower profile" missing profile=oem May 13 00:41:33.469647 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 13 00:41:33.469658 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 13 00:41:33.469843 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 13 00:41:35.672797 systemd[1]: Started systemd-journald.service. May 13 00:41:33.469878 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:41:33.469890 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:41:35.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:33.470430 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 13 00:41:33.470461 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 13 00:41:33.470476 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 13 00:41:33.470489 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 13 00:41:35.673242 systemd[1]: Mounted dev-hugepages.mount. 
May 13 00:41:33.470503 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 13 00:41:33.470515 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 13 00:41:35.261178 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:35.261780 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:35.261891 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:35.262067 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:35.262115 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:35Z" level=debug msg="profile applied" sealed 
profile=/run/torcx/profile.json upper profile= May 13 00:41:35.262173 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-05-13T00:41:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 13 00:41:35.674287 systemd[1]: Mounted dev-mqueue.mount. May 13 00:41:35.675148 systemd[1]: Mounted media.mount. May 13 00:41:35.676139 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:41:35.677321 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:41:35.678747 systemd[1]: Mounted tmp.mount. May 13 00:41:35.680012 systemd[1]: Finished kmod-static-nodes.service. May 13 00:41:35.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.681464 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:41:35.681684 systemd[1]: Finished modprobe@configfs.service. May 13 00:41:35.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.683292 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:41:35.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:35.684892 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:35.685074 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:35.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.686609 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:41:35.686815 systemd[1]: Finished modprobe@drm.service. May 13 00:41:35.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.688217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:35.688441 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:35.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.689937 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 13 00:41:35.690134 systemd[1]: Finished modprobe@fuse.service. May 13 00:41:35.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.691612 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:35.691823 systemd[1]: Finished modprobe@loop.service. May 13 00:41:35.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.693257 systemd[1]: Finished systemd-modules-load.service. May 13 00:41:35.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.694801 systemd[1]: Finished systemd-network-generator.service. May 13 00:41:35.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.696283 systemd[1]: Finished systemd-remount-fs.service. 
May 13 00:41:35.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.697563 systemd[1]: Reached target network-pre.target. May 13 00:41:35.699430 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:41:35.701430 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:41:35.702584 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:41:35.703935 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:41:35.705720 systemd[1]: Starting systemd-journal-flush.service... May 13 00:41:35.706811 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:35.710496 systemd-journald[986]: Time spent on flushing to /var/log/journal/e152a106c2674e849075e524db863b43 is 13.528ms for 1160 entries. May 13 00:41:35.710496 systemd-journald[986]: System Journal (/var/log/journal/e152a106c2674e849075e524db863b43) is 8.0M, max 195.6M, 187.6M free. May 13 00:41:35.746185 systemd-journald[986]: Received client request to flush runtime journal. May 13 00:41:35.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 00:41:35.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:35.707743 systemd[1]: Starting systemd-random-seed.service... May 13 00:41:35.708802 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:35.709961 systemd[1]: Starting systemd-sysctl.service... May 13 00:41:35.713551 systemd[1]: Starting systemd-sysusers.service... May 13 00:41:35.747379 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:41:35.717339 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:41:35.718771 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:41:35.720156 systemd[1]: Finished systemd-random-seed.service. May 13 00:41:35.721950 systemd[1]: Reached target first-boot-complete.target. May 13 00:41:35.728038 systemd[1]: Finished systemd-sysusers.service. May 13 00:41:35.729151 systemd[1]: Finished systemd-sysctl.service. May 13 00:41:35.730223 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:41:35.732200 systemd[1]: Starting systemd-udev-settle.service... May 13 00:41:35.746939 systemd[1]: Finished systemd-journal-flush.service. May 13 00:41:35.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.209036 systemd[1]: Finished systemd-hwdb-update.service. May 13 00:41:36.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:36.210000 audit: BPF prog-id=21 op=LOAD May 13 00:41:36.210000 audit: BPF prog-id=22 op=LOAD May 13 00:41:36.210000 audit: BPF prog-id=7 op=UNLOAD May 13 00:41:36.210000 audit: BPF prog-id=8 op=UNLOAD May 13 00:41:36.211844 systemd[1]: Starting systemd-udevd.service... May 13 00:41:36.228073 systemd-udevd[1015]: Using default interface naming scheme 'v252'. May 13 00:41:36.241431 systemd[1]: Started systemd-udevd.service. May 13 00:41:36.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.243000 audit: BPF prog-id=23 op=LOAD May 13 00:41:36.247360 systemd[1]: Starting systemd-networkd.service... May 13 00:41:36.253000 audit: BPF prog-id=24 op=LOAD May 13 00:41:36.253000 audit: BPF prog-id=25 op=LOAD May 13 00:41:36.253000 audit: BPF prog-id=26 op=LOAD May 13 00:41:36.255094 systemd[1]: Starting systemd-userdbd.service... May 13 00:41:36.268600 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 13 00:41:36.281614 systemd[1]: Started systemd-userdbd.service. May 13 00:41:36.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.303789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:41:36.311607 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:41:36.321621 kernel: ACPI: button: Power Button [PWRF] May 13 00:41:36.325732 systemd-networkd[1026]: lo: Link UP May 13 00:41:36.325744 systemd-networkd[1026]: lo: Gained carrier May 13 00:41:36.326115 systemd-networkd[1026]: Enumeration completed May 13 00:41:36.326219 systemd[1]: Started systemd-networkd.service. 
May 13 00:41:36.326386 systemd-networkd[1026]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:41:36.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.327898 systemd-networkd[1026]: eth0: Link UP May 13 00:41:36.327909 systemd-networkd[1026]: eth0: Gained carrier May 13 00:41:36.327000 audit[1042]: AVC avc: denied { confidentiality } for pid=1042 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 00:41:36.340741 systemd-networkd[1026]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:41:36.327000 audit[1042]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5589e78c6890 a1=338ac a2=7f724fcd3bc5 a3=5 items=110 ppid=1015 pid=1042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:36.327000 audit: CWD cwd="/" May 13 00:41:36.327000 audit: PATH item=0 name=(null) inode=1040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=1 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=2 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=3 name=(null) inode=14653 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=4 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=5 name=(null) inode=14654 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=6 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=7 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=8 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=9 name=(null) inode=14656 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=10 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=11 name=(null) inode=14657 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=12 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=13 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=14 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=15 name=(null) inode=14659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=16 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=17 name=(null) inode=14660 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=18 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=19 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=20 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=21 name=(null) inode=14662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=22 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=23 name=(null) inode=14663 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=24 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=25 name=(null) inode=14664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=26 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=27 name=(null) inode=14665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=28 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=29 name=(null) inode=14666 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=30 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 
00:41:36.327000 audit: PATH item=31 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=32 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=33 name=(null) inode=14668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=34 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=35 name=(null) inode=14669 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=36 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=37 name=(null) inode=14670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=38 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=39 name=(null) inode=14671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=40 
name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=41 name=(null) inode=14672 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=42 name=(null) inode=14652 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=43 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=44 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=45 name=(null) inode=14674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=46 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=47 name=(null) inode=14675 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=48 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=49 name=(null) inode=14676 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=50 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=51 name=(null) inode=14677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=52 name=(null) inode=14673 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=53 name=(null) inode=14678 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=54 name=(null) inode=1040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=55 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=56 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=57 name=(null) inode=14680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=58 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=59 name=(null) inode=14681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=60 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=61 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=62 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=63 name=(null) inode=14683 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=64 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=65 name=(null) inode=14684 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=66 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=67 name=(null) inode=14685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=68 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=69 name=(null) inode=14686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=70 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=71 name=(null) inode=14687 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=72 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=73 name=(null) inode=14688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=74 name=(null) inode=14688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=75 name=(null) inode=14689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=76 name=(null) inode=14688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=77 name=(null) inode=14690 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=78 name=(null) inode=14688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=79 name=(null) inode=14691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=80 name=(null) inode=14688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=81 name=(null) inode=14692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=82 name=(null) inode=14688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=83 name=(null) inode=14693 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=84 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=85 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 
00:41:36.327000 audit: PATH item=86 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=87 name=(null) inode=14695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=88 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=89 name=(null) inode=14696 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=90 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=91 name=(null) inode=14697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=92 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=93 name=(null) inode=14698 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=94 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=95 
name=(null) inode=14699 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=96 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=97 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=98 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=99 name=(null) inode=14701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=100 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=101 name=(null) inode=14702 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=102 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=103 name=(null) inode=14703 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=104 name=(null) inode=14700 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=105 name=(null) inode=14704 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=106 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=107 name=(null) inode=14705 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PATH item=109 name=(null) inode=14706 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:36.327000 audit: PROCTITLE proctitle="(udev-worker)" May 13 00:41:36.359070 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 13 00:41:36.362991 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 00:41:36.363156 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 00:41:36.363279 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 00:41:36.366610 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:41:36.375598 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:41:36.415993 kernel: kvm: Nested Virtualization enabled May 13 00:41:36.416040 kernel: SVM: kvm: Nested Paging enabled May 13 00:41:36.416054 kernel: SVM: 
Virtual VMLOAD VMSAVE supported May 13 00:41:36.416667 kernel: SVM: Virtual GIF supported May 13 00:41:36.431607 kernel: EDAC MC: Ver: 3.0.0 May 13 00:41:36.458931 systemd[1]: Finished systemd-udev-settle.service. May 13 00:41:36.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.461065 systemd[1]: Starting lvm2-activation-early.service... May 13 00:41:36.468505 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:41:36.496173 systemd[1]: Finished lvm2-activation-early.service. May 13 00:41:36.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.497166 systemd[1]: Reached target cryptsetup.target. May 13 00:41:36.498856 systemd[1]: Starting lvm2-activation.service... May 13 00:41:36.501942 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:41:36.526199 systemd[1]: Finished lvm2-activation.service. May 13 00:41:36.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.527136 systemd[1]: Reached target local-fs-pre.target. May 13 00:41:36.527993 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:41:36.528019 systemd[1]: Reached target local-fs.target. May 13 00:41:36.528830 systemd[1]: Reached target machines.target. May 13 00:41:36.530529 systemd[1]: Starting ldconfig.service... 
May 13 00:41:36.531469 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:36.531514 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:36.532387 systemd[1]: Starting systemd-boot-update.service... May 13 00:41:36.534230 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:41:36.536326 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:41:36.538158 systemd[1]: Starting systemd-sysext.service... May 13 00:41:36.539363 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1055 (bootctl) May 13 00:41:36.540630 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:41:36.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.543857 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:41:36.553180 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:41:36.556800 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:41:36.556936 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:41:36.567610 kernel: loop0: detected capacity change from 0 to 210664 May 13 00:41:36.578793 systemd-fsck[1062]: fsck.fat 4.2 (2021-01-31) May 13 00:41:36.578793 systemd-fsck[1062]: /dev/vda1: 791 files, 120712/258078 clusters May 13 00:41:36.580834 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
May 13 00:41:36.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.583621 systemd[1]: Mounting boot.mount... May 13 00:41:36.808525 systemd[1]: Mounted boot.mount. May 13 00:41:36.813106 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:41:36.813672 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:41:36.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.817689 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:41:36.820093 systemd[1]: Finished systemd-boot-update.service. May 13 00:41:36.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.831605 kernel: loop1: detected capacity change from 0 to 210664 May 13 00:41:36.835841 (sd-sysext)[1068]: Using extensions 'kubernetes'. May 13 00:41:36.836174 (sd-sysext)[1068]: Merged extensions into '/usr'. May 13 00:41:36.851259 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:36.852561 systemd[1]: Mounting usr-share-oem.mount... May 13 00:41:36.853438 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:36.854527 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:36.856238 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:36.858379 systemd[1]: Starting modprobe@loop.service... 
May 13 00:41:36.859235 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:36.859378 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:36.859515 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:36.862067 systemd[1]: Mounted usr-share-oem.mount. May 13 00:41:36.863307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:36.863438 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:36.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.864859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:36.864993 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:36.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.866319 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:36.866450 systemd[1]: Finished modprobe@loop.service. 
May 13 00:41:36.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.867677 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:36.867782 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:36.868764 systemd[1]: Finished systemd-sysext.service. May 13 00:41:36.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:36.870881 systemd[1]: Starting ensure-sysext.service... May 13 00:41:36.872735 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:41:36.878621 systemd[1]: Reloading. May 13 00:41:36.885735 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:41:36.887674 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:41:36.887915 ldconfig[1054]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:41:36.891557 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 13 00:41:36.935370 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2025-05-13T00:41:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:36.935699 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2025-05-13T00:41:36Z" level=info msg="torcx already run" May 13 00:41:36.988054 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:36.988072 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:37.004615 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 13 00:41:37.055000 audit: BPF prog-id=27 op=LOAD May 13 00:41:37.055000 audit: BPF prog-id=18 op=UNLOAD May 13 00:41:37.055000 audit: BPF prog-id=28 op=LOAD May 13 00:41:37.055000 audit: BPF prog-id=29 op=LOAD May 13 00:41:37.055000 audit: BPF prog-id=19 op=UNLOAD May 13 00:41:37.055000 audit: BPF prog-id=20 op=UNLOAD May 13 00:41:37.056000 audit: BPF prog-id=30 op=LOAD May 13 00:41:37.056000 audit: BPF prog-id=31 op=LOAD May 13 00:41:37.056000 audit: BPF prog-id=21 op=UNLOAD May 13 00:41:37.056000 audit: BPF prog-id=22 op=UNLOAD May 13 00:41:37.057000 audit: BPF prog-id=32 op=LOAD May 13 00:41:37.057000 audit: BPF prog-id=24 op=UNLOAD May 13 00:41:37.057000 audit: BPF prog-id=33 op=LOAD May 13 00:41:37.057000 audit: BPF prog-id=34 op=LOAD May 13 00:41:37.057000 audit: BPF prog-id=25 op=UNLOAD May 13 00:41:37.057000 audit: BPF prog-id=26 op=UNLOAD May 13 00:41:37.058000 audit: BPF prog-id=35 op=LOAD May 13 00:41:37.058000 audit: BPF prog-id=23 op=UNLOAD May 13 00:41:37.061553 systemd[1]: Finished ldconfig.service. May 13 00:41:37.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.063472 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:41:37.067032 systemd[1]: Starting audit-rules.service... May 13 00:41:37.068944 systemd[1]: Starting clean-ca-certificates.service... May 13 00:41:37.071062 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:41:37.072000 audit: BPF prog-id=36 op=LOAD May 13 00:41:37.074000 audit: BPF prog-id=37 op=LOAD May 13 00:41:37.073603 systemd[1]: Starting systemd-resolved.service... 
May 13 00:41:37.075810 systemd[1]: Starting systemd-timesyncd.service... May 13 00:41:37.078193 systemd[1]: Starting systemd-update-utmp.service... May 13 00:41:37.081375 systemd[1]: Finished clean-ca-certificates.service. May 13 00:41:37.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.082000 audit[1148]: SYSTEM_BOOT pid=1148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:41:37.082863 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:37.085748 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:37.085977 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:37.087206 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:37.089138 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:37.090986 systemd[1]: Starting modprobe@loop.service... May 13 00:41:37.091742 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:37.091896 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:37.092031 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 13 00:41:37.092145 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:37.093435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:37.093607 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:37.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.094919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:37.095029 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:37.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.096371 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:41:37.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.097777 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:37.097882 systemd[1]: Finished modprobe@loop.service. 
May 13 00:41:37.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.099070 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:37.099208 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:37.100288 systemd[1]: Starting systemd-update-done.service... May 13 00:41:37.101623 systemd[1]: Finished systemd-update-utmp.service. May 13 00:41:37.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.104348 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:37.104520 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:37.105549 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:37.107368 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:37.109237 systemd[1]: Starting modprobe@loop.service... May 13 00:41:37.109987 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:37.110089 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 13 00:41:37.110171 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:37.110241 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:37.111071 systemd[1]: Finished systemd-update-done.service. May 13 00:41:37.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.112242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:37.112349 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:37.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.113534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:37.113659 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:37.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:37.114916 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:37.115014 systemd[1]: Finished modprobe@loop.service. May 13 00:41:37.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.116075 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:37.116162 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:37.118304 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:37.118504 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:37.119770 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:37.121727 systemd[1]: Starting modprobe@drm.service... May 13 00:41:37.123835 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:37.125822 systemd[1]: Starting modprobe@loop.service... May 13 00:41:37.126671 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:37.126801 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:37.128165 systemd[1]: Starting systemd-networkd-wait-online.service... 
May 13 00:41:37.129074 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:37.129178 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:37.130215 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:37.130323 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:37.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:37.132000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:41:37.132000 audit[1166]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc3b68590 a2=420 a3=0 items=0 ppid=1137 pid=1166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:37.132000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:41:37.132880 augenrules[1166]: No rules May 13 00:41:37.132857 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:41:37.132957 systemd[1]: Finished modprobe@drm.service. May 13 00:41:37.134122 systemd[1]: Finished audit-rules.service. May 13 00:41:37.135179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:37.135284 systemd[1]: Finished modprobe@efi_pstore.service. 
May 13 00:41:37.136426 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:37.136528 systemd[1]: Finished modprobe@loop.service. May 13 00:41:37.137922 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:37.138006 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:37.138765 systemd[1]: Started systemd-timesyncd.service. May 13 00:41:37.139941 systemd-timesyncd[1147]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:41:37.139986 systemd-timesyncd[1147]: Initial clock synchronization to Tue 2025-05-13 00:41:36.844454 UTC. May 13 00:41:37.140098 systemd[1]: Finished ensure-sysext.service. May 13 00:41:37.141071 systemd[1]: Reached target time-set.target. May 13 00:41:37.142417 systemd-resolved[1141]: Positive Trust Anchors: May 13 00:41:37.142432 systemd-resolved[1141]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:41:37.142459 systemd-resolved[1141]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:41:37.149137 systemd-resolved[1141]: Defaulting to hostname 'linux'. May 13 00:41:37.150528 systemd[1]: Started systemd-resolved.service. May 13 00:41:37.151490 systemd[1]: Reached target network.target. May 13 00:41:37.152325 systemd[1]: Reached target nss-lookup.target. May 13 00:41:37.153202 systemd[1]: Reached target sysinit.target. May 13 00:41:37.154112 systemd[1]: Started motdgen.path. 
May 13 00:41:37.154892 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:41:37.156148 systemd[1]: Started logrotate.timer. May 13 00:41:37.157031 systemd[1]: Started mdadm.timer. May 13 00:41:37.157779 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:41:37.158688 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:41:37.158713 systemd[1]: Reached target paths.target. May 13 00:41:37.159499 systemd[1]: Reached target timers.target. May 13 00:41:37.160617 systemd[1]: Listening on dbus.socket. May 13 00:41:37.162364 systemd[1]: Starting docker.socket... May 13 00:41:37.165170 systemd[1]: Listening on sshd.socket. May 13 00:41:37.166080 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:37.166413 systemd[1]: Listening on docker.socket. May 13 00:41:37.167285 systemd[1]: Reached target sockets.target. May 13 00:41:37.168122 systemd[1]: Reached target basic.target. May 13 00:41:37.168977 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:41:37.169002 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:41:37.169853 systemd[1]: Starting containerd.service... May 13 00:41:37.171357 systemd[1]: Starting dbus.service... May 13 00:41:37.172968 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:41:37.174802 systemd[1]: Starting extend-filesystems.service... May 13 00:41:37.175797 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:41:37.176743 systemd[1]: Starting motdgen.service... 
May 13 00:41:37.177292 jq[1179]: false May 13 00:41:37.178373 systemd[1]: Starting prepare-helm.service... May 13 00:41:37.179992 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:41:37.181705 systemd[1]: Starting sshd-keygen.service... May 13 00:41:37.184539 systemd[1]: Starting systemd-logind.service... May 13 00:41:37.186870 extend-filesystems[1180]: Found loop1 May 13 00:41:37.186870 extend-filesystems[1180]: Found sr0 May 13 00:41:37.186870 extend-filesystems[1180]: Found vda May 13 00:41:37.186870 extend-filesystems[1180]: Found vda1 May 13 00:41:37.186870 extend-filesystems[1180]: Found vda2 May 13 00:41:37.186870 extend-filesystems[1180]: Found vda3 May 13 00:41:37.186870 extend-filesystems[1180]: Found usr May 13 00:41:37.186870 extend-filesystems[1180]: Found vda4 May 13 00:41:37.186870 extend-filesystems[1180]: Found vda6 May 13 00:41:37.186870 extend-filesystems[1180]: Found vda7 May 13 00:41:37.186870 extend-filesystems[1180]: Found vda9 May 13 00:41:37.186870 extend-filesystems[1180]: Checking size of /dev/vda9 May 13 00:41:37.185359 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:37.205814 extend-filesystems[1180]: Resized partition /dev/vda9 May 13 00:41:37.185410 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:41:37.185764 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:41:37.207460 jq[1199]: true May 13 00:41:37.186694 systemd[1]: Starting update-engine.service... May 13 00:41:37.192568 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:41:37.208003 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 13 00:41:37.208187 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:41:37.208456 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:41:37.212327 systemd[1]: Finished motdgen.service. May 13 00:41:37.213852 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:41:37.214010 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:41:37.219407 systemd[1]: Started dbus.service. May 13 00:41:37.219120 dbus-daemon[1178]: [system] SELinux support is enabled May 13 00:41:37.229267 tar[1206]: linux-amd64/helm May 13 00:41:37.229436 extend-filesystems[1203]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:41:37.231812 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:41:37.226480 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:41:37.231946 jq[1207]: true May 13 00:41:37.226521 systemd[1]: Reached target system-config.target. May 13 00:41:37.232885 update_engine[1192]: I0513 00:41:37.232018 1192 main.cc:92] Flatcar Update Engine starting May 13 00:41:37.229007 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:41:37.229024 systemd[1]: Reached target user-config.target. May 13 00:41:37.234475 update_engine[1192]: I0513 00:41:37.234369 1192 update_check_scheduler.cc:74] Next update check in 7m9s May 13 00:41:37.234332 systemd[1]: Started update-engine.service. May 13 00:41:37.237195 systemd[1]: Started locksmithd.service. 
May 13 00:41:37.253007 env[1208]: time="2025-05-13T00:41:37.252661489Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:41:37.259537 systemd-logind[1190]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:41:37.259569 systemd-logind[1190]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:41:37.261890 systemd-logind[1190]: New seat seat0. May 13 00:41:37.265230 systemd[1]: Started systemd-logind.service. May 13 00:41:37.285317 env[1208]: time="2025-05-13T00:41:37.285263732Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:41:37.285636 env[1208]: time="2025-05-13T00:41:37.285614710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:37.286603 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:41:37.288007 env[1208]: time="2025-05-13T00:41:37.287948017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:37.288007 env[1208]: time="2025-05-13T00:41:37.287995295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:37.313272 extend-filesystems[1203]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:41:37.313272 extend-filesystems[1203]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:41:37.313272 extend-filesystems[1203]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
May 13 00:41:37.318677 extend-filesystems[1180]: Resized filesystem in /dev/vda9 May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.314106112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.314156456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.314176935Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.314189829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.314323309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.314674207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.315208961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.315239578Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.315287368Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:41:37.319758 env[1208]: time="2025-05-13T00:41:37.315298058Z" level=info msg="metadata content store policy set" policy=shared May 13 00:41:37.313930 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:41:37.314095 systemd[1]: Finished extend-filesystems.service. May 13 00:41:37.318720 locksmithd[1218]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:41:37.321895 bash[1232]: Updated "/home/core/.ssh/authorized_keys" May 13 00:41:37.322008 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:41:37.323319 env[1208]: time="2025-05-13T00:41:37.323301169Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:41:37.323420 env[1208]: time="2025-05-13T00:41:37.323402640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:41:37.323515 env[1208]: time="2025-05-13T00:41:37.323497958Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:41:37.323677 env[1208]: time="2025-05-13T00:41:37.323659351Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:41:37.323775 env[1208]: time="2025-05-13T00:41:37.323757525Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:41:37.323881 env[1208]: time="2025-05-13T00:41:37.323863995Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:41:37.323977 env[1208]: time="2025-05-13T00:41:37.323959865Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 13 00:41:37.324075 env[1208]: time="2025-05-13T00:41:37.324057458Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:41:37.324172 env[1208]: time="2025-05-13T00:41:37.324154820Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:41:37.324268 env[1208]: time="2025-05-13T00:41:37.324251331Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:41:37.324363 env[1208]: time="2025-05-13T00:41:37.324345959Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:41:37.324457 env[1208]: time="2025-05-13T00:41:37.324440326Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:41:37.324636 env[1208]: time="2025-05-13T00:41:37.324620073Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:41:37.324808 env[1208]: time="2025-05-13T00:41:37.324792907Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:41:37.325098 env[1208]: time="2025-05-13T00:41:37.325081248Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:41:37.325190 env[1208]: time="2025-05-13T00:41:37.325172750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:41:37.325286 env[1208]: time="2025-05-13T00:41:37.325268519Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:41:37.325432 env[1208]: time="2025-05-13T00:41:37.325401789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 13 00:41:37.325525 env[1208]: time="2025-05-13T00:41:37.325507327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:41:37.325660 env[1208]: time="2025-05-13T00:41:37.325634275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:41:37.325757 env[1208]: time="2025-05-13T00:41:37.325739974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:41:37.325854 env[1208]: time="2025-05-13T00:41:37.325834792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:41:37.325954 env[1208]: time="2025-05-13T00:41:37.325936853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:41:37.326049 env[1208]: time="2025-05-13T00:41:37.326032522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:41:37.326144 env[1208]: time="2025-05-13T00:41:37.326127390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:41:37.326253 env[1208]: time="2025-05-13T00:41:37.326236255Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:41:37.326452 env[1208]: time="2025-05-13T00:41:37.326435919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:41:37.326538 env[1208]: time="2025-05-13T00:41:37.326521409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:41:37.326652 env[1208]: time="2025-05-13T00:41:37.326626446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 13 00:41:37.326741 env[1208]: time="2025-05-13T00:41:37.326723619Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:41:37.326830 env[1208]: time="2025-05-13T00:41:37.326809881Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:41:37.326903 env[1208]: time="2025-05-13T00:41:37.326884711Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:41:37.327005 env[1208]: time="2025-05-13T00:41:37.326986592Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:41:37.327104 env[1208]: time="2025-05-13T00:41:37.327085678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:41:37.327387 env[1208]: time="2025-05-13T00:41:37.327333964Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:41:37.327987 env[1208]: time="2025-05-13T00:41:37.327504964Z" level=info msg="Connect containerd service" May 13 00:41:37.327987 env[1208]: time="2025-05-13T00:41:37.327539970Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:41:37.328479 env[1208]: time="2025-05-13T00:41:37.328458723Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:41:37.328663 env[1208]: time="2025-05-13T00:41:37.328615397Z" level=info msg="Start subscribing containerd event" May 13 00:41:37.328720 env[1208]: time="2025-05-13T00:41:37.328676051Z" level=info msg="Start recovering state" May 13 00:41:37.328754 env[1208]: 
time="2025-05-13T00:41:37.328722879Z" level=info msg="Start event monitor" May 13 00:41:37.328754 env[1208]: time="2025-05-13T00:41:37.328731094Z" level=info msg="Start snapshots syncer" May 13 00:41:37.328754 env[1208]: time="2025-05-13T00:41:37.328739430Z" level=info msg="Start cni network conf syncer for default" May 13 00:41:37.328754 env[1208]: time="2025-05-13T00:41:37.328745842Z" level=info msg="Start streaming server" May 13 00:41:37.328972 env[1208]: time="2025-05-13T00:41:37.328955836Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:41:37.329093 env[1208]: time="2025-05-13T00:41:37.329068948Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:41:37.329272 systemd[1]: Started containerd.service. May 13 00:41:37.329429 env[1208]: time="2025-05-13T00:41:37.329413654Z" level=info msg="containerd successfully booted in 0.077410s" May 13 00:41:37.548203 sshd_keygen[1202]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:41:37.568331 systemd[1]: Finished sshd-keygen.service. May 13 00:41:37.570683 systemd[1]: Starting issuegen.service... May 13 00:41:37.575419 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:41:37.575548 systemd[1]: Finished issuegen.service. May 13 00:41:37.586791 systemd[1]: Starting systemd-user-sessions.service... May 13 00:41:37.592180 systemd[1]: Finished systemd-user-sessions.service. May 13 00:41:37.594441 systemd[1]: Started getty@tty1.service. May 13 00:41:37.596746 systemd[1]: Started serial-getty@ttyS0.service. May 13 00:41:37.597812 systemd[1]: Reached target getty.target. May 13 00:41:37.627871 tar[1206]: linux-amd64/LICENSE May 13 00:41:37.627959 tar[1206]: linux-amd64/README.md May 13 00:41:37.631656 systemd[1]: Finished prepare-helm.service. May 13 00:41:38.324754 systemd-networkd[1026]: eth0: Gained IPv6LL May 13 00:41:38.326447 systemd[1]: Finished systemd-networkd-wait-online.service. 
May 13 00:41:38.327725 systemd[1]: Reached target network-online.target. May 13 00:41:38.329982 systemd[1]: Starting kubelet.service... May 13 00:41:38.877656 systemd[1]: Started kubelet.service. May 13 00:41:38.879228 systemd[1]: Reached target multi-user.target. May 13 00:41:38.881345 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:41:38.887687 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:41:38.887844 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:41:38.889023 systemd[1]: Startup finished in 765ms (kernel) + 6.199s (initrd) + 5.902s (userspace) = 12.867s. May 13 00:41:39.287762 kubelet[1260]: E0513 00:41:39.287653 1260 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:39.289124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:39.289242 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:41:39.454423 systemd[1]: Created slice system-sshd.slice. May 13 00:41:39.455451 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:34230.service. May 13 00:41:39.494965 sshd[1270]: Accepted publickey for core from 10.0.0.1 port 34230 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:39.496380 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:39.504919 systemd-logind[1190]: New session 1 of user core. May 13 00:41:39.505966 systemd[1]: Created slice user-500.slice. May 13 00:41:39.507098 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:41:39.514847 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:41:39.516425 systemd[1]: Starting user@500.service... 
May 13 00:41:39.518889 (systemd)[1273]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:39.583033 systemd[1273]: Queued start job for default target default.target. May 13 00:41:39.583429 systemd[1273]: Reached target paths.target. May 13 00:41:39.583453 systemd[1273]: Reached target sockets.target. May 13 00:41:39.583468 systemd[1273]: Reached target timers.target. May 13 00:41:39.583482 systemd[1273]: Reached target basic.target. May 13 00:41:39.583526 systemd[1273]: Reached target default.target. May 13 00:41:39.583567 systemd[1273]: Startup finished in 60ms. May 13 00:41:39.583606 systemd[1]: Started user@500.service. May 13 00:41:39.584409 systemd[1]: Started session-1.scope. May 13 00:41:39.632841 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:34232.service. May 13 00:41:39.671348 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 34232 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:39.672509 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:39.675955 systemd-logind[1190]: New session 2 of user core. May 13 00:41:39.677043 systemd[1]: Started session-2.scope. May 13 00:41:39.727491 sshd[1282]: pam_unix(sshd:session): session closed for user core May 13 00:41:39.729954 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:34232.service: Deactivated successfully. May 13 00:41:39.730537 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:41:39.731079 systemd-logind[1190]: Session 2 logged out. Waiting for processes to exit. May 13 00:41:39.732182 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:34244.service. May 13 00:41:39.732923 systemd-logind[1190]: Removed session 2. 
May 13 00:41:39.768940 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 34244 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:39.769906 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:39.772683 systemd-logind[1190]: New session 3 of user core. May 13 00:41:39.773360 systemd[1]: Started session-3.scope. May 13 00:41:39.819759 sshd[1288]: pam_unix(sshd:session): session closed for user core May 13 00:41:39.822150 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:34244.service: Deactivated successfully. May 13 00:41:39.822672 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:41:39.823118 systemd-logind[1190]: Session 3 logged out. Waiting for processes to exit. May 13 00:41:39.824187 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:34258.service. May 13 00:41:39.824997 systemd-logind[1190]: Removed session 3. May 13 00:41:39.861335 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 34258 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:39.862295 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:39.865495 systemd-logind[1190]: New session 4 of user core. May 13 00:41:39.866250 systemd[1]: Started session-4.scope. May 13 00:41:39.917419 sshd[1294]: pam_unix(sshd:session): session closed for user core May 13 00:41:39.919611 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:34258.service: Deactivated successfully. May 13 00:41:39.920065 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:41:39.920469 systemd-logind[1190]: Session 4 logged out. Waiting for processes to exit. May 13 00:41:39.921307 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:34270.service. May 13 00:41:39.921872 systemd-logind[1190]: Removed session 4. 
May 13 00:41:39.960551 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 34270 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:39.961714 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:39.965040 systemd-logind[1190]: New session 5 of user core. May 13 00:41:39.965772 systemd[1]: Started session-5.scope. May 13 00:41:40.018759 sudo[1303]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:41:40.018947 sudo[1303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:41:40.040848 systemd[1]: Starting docker.service... May 13 00:41:40.069667 env[1314]: time="2025-05-13T00:41:40.069613869Z" level=info msg="Starting up" May 13 00:41:40.071007 env[1314]: time="2025-05-13T00:41:40.070965786Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:41:40.071007 env[1314]: time="2025-05-13T00:41:40.070991429Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:41:40.071007 env[1314]: time="2025-05-13T00:41:40.071011329Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:41:40.071183 env[1314]: time="2025-05-13T00:41:40.071022378Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:41:40.072569 env[1314]: time="2025-05-13T00:41:40.072538144Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:41:40.072569 env[1314]: time="2025-05-13T00:41:40.072561761Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:41:40.072682 env[1314]: time="2025-05-13T00:41:40.072593714Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:41:40.072682 env[1314]: time="2025-05-13T00:41:40.072602388Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:41:40.709146 env[1314]: time="2025-05-13T00:41:40.709100041Z" level=info msg="Loading containers: start." May 13 00:41:40.831605 kernel: Initializing XFRM netlink socket May 13 00:41:40.859038 env[1314]: time="2025-05-13T00:41:40.858973920Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 00:41:40.912210 systemd-networkd[1026]: docker0: Link UP May 13 00:41:40.926947 env[1314]: time="2025-05-13T00:41:40.926900710Z" level=info msg="Loading containers: done." May 13 00:41:40.937832 env[1314]: time="2025-05-13T00:41:40.937789738Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:41:40.937967 env[1314]: time="2025-05-13T00:41:40.937951192Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:41:40.938054 env[1314]: time="2025-05-13T00:41:40.938030177Z" level=info msg="Daemon has completed initialization" May 13 00:41:40.958861 systemd[1]: Started docker.service. 
May 13 00:41:40.967028 env[1314]: time="2025-05-13T00:41:40.966741164Z" level=info msg="API listen on /run/docker.sock" May 13 00:41:41.648970 env[1208]: time="2025-05-13T00:41:41.648914405Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:41:43.093421 env[1208]: time="2025-05-13T00:41:43.093347708Z" level=error msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-apiserver:v1.30.12\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-us-west-1.s3.dualstack.us-west-1.amazonaws.com/containers/images/sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\": dial tcp: lookup prod-registry-k8s-io-us-west-1.s3.dualstack.us-west-1.amazonaws.com: no such host" May 13 00:41:43.104101 env[1208]: time="2025-05-13T00:41:43.104070380Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:41:44.063972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962332155.mount: Deactivated successfully. 
May 13 00:41:46.701552 env[1208]: time="2025-05-13T00:41:46.701489309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:46.813433 env[1208]: time="2025-05-13T00:41:46.813374235Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:46.867847 env[1208]: time="2025-05-13T00:41:46.867797573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:46.911636 env[1208]: time="2025-05-13T00:41:46.911603411Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:46.912553 env[1208]: time="2025-05-13T00:41:46.912496569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 00:41:46.922435 env[1208]: time="2025-05-13T00:41:46.922396490Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:41:49.540003 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:41:49.540196 systemd[1]: Stopped kubelet.service. May 13 00:41:49.541495 systemd[1]: Starting kubelet.service... May 13 00:41:49.621418 systemd[1]: Started kubelet.service. 
May 13 00:41:50.464610 kubelet[1468]: E0513 00:41:50.464535 1468 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:50.467551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:50.467730 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:41:51.607244 env[1208]: time="2025-05-13T00:41:51.607172936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:51.612813 env[1208]: time="2025-05-13T00:41:51.612767531Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:51.614818 env[1208]: time="2025-05-13T00:41:51.614772890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:51.616812 env[1208]: time="2025-05-13T00:41:51.616771691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:51.617701 env[1208]: time="2025-05-13T00:41:51.617663900Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 00:41:51.628789 env[1208]: 
time="2025-05-13T00:41:51.628745094Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:41:53.504266 env[1208]: time="2025-05-13T00:41:53.504201056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:53.506793 env[1208]: time="2025-05-13T00:41:53.506741633Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:53.508750 env[1208]: time="2025-05-13T00:41:53.508695951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:53.510538 env[1208]: time="2025-05-13T00:41:53.510507833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:53.511271 env[1208]: time="2025-05-13T00:41:53.511225661Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 00:41:53.521181 env[1208]: time="2025-05-13T00:41:53.521150402Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:41:54.607468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719179050.mount: Deactivated successfully. 
May 13 00:41:55.746923 env[1208]: time="2025-05-13T00:41:55.746844622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:55.749159 env[1208]: time="2025-05-13T00:41:55.749100182Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:55.751502 env[1208]: time="2025-05-13T00:41:55.751457873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:55.753720 env[1208]: time="2025-05-13T00:41:55.753659098Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:55.753997 env[1208]: time="2025-05-13T00:41:55.753963137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 00:41:55.766008 env[1208]: time="2025-05-13T00:41:55.765957958Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:41:56.881755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824541318.mount: Deactivated successfully. 
May 13 00:41:58.805223 env[1208]: time="2025-05-13T00:41:58.805152657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:58.807276 env[1208]: time="2025-05-13T00:41:58.807212092Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:58.809076 env[1208]: time="2025-05-13T00:41:58.809031812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:58.810930 env[1208]: time="2025-05-13T00:41:58.810889576Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:58.811650 env[1208]: time="2025-05-13T00:41:58.811621516Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:41:58.828613 env[1208]: time="2025-05-13T00:41:58.828558591Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:41:59.316897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586575992.mount: Deactivated successfully. 
May 13 00:41:59.322400 env[1208]: time="2025-05-13T00:41:59.322357070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:59.324195 env[1208]: time="2025-05-13T00:41:59.324138465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:59.325608 env[1208]: time="2025-05-13T00:41:59.325568939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:59.326988 env[1208]: time="2025-05-13T00:41:59.326955368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:59.327426 env[1208]: time="2025-05-13T00:41:59.327392160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 00:41:59.341210 env[1208]: time="2025-05-13T00:41:59.341170812Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:41:59.839558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2091483687.mount: Deactivated successfully. May 13 00:42:00.718530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:42:00.718764 systemd[1]: Stopped kubelet.service. May 13 00:42:00.720340 systemd[1]: Starting kubelet.service... May 13 00:42:00.795791 systemd[1]: Started kubelet.service. 
May 13 00:42:00.836282 kubelet[1514]: E0513 00:42:00.836231 1514 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:42:00.838245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:42:00.838419 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:42:02.926147 env[1208]: time="2025-05-13T00:42:02.926076597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:02.928366 env[1208]: time="2025-05-13T00:42:02.928315833Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:02.930235 env[1208]: time="2025-05-13T00:42:02.930180700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:02.931784 env[1208]: time="2025-05-13T00:42:02.931740009Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:02.932462 env[1208]: time="2025-05-13T00:42:02.932412493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 00:42:05.275160 systemd[1]: Stopped kubelet.service. May 13 00:42:05.277134 systemd[1]: Starting kubelet.service... 
May 13 00:42:05.289375 systemd[1]: Reloading. May 13 00:42:05.359671 /usr/lib/systemd/system-generators/torcx-generator[1623]: time="2025-05-13T00:42:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:42:05.359980 /usr/lib/systemd/system-generators/torcx-generator[1623]: time="2025-05-13T00:42:05Z" level=info msg="torcx already run" May 13 00:42:05.659451 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:42:05.659468 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:42:05.676040 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:42:05.754497 systemd[1]: Started kubelet.service. May 13 00:42:05.756128 systemd[1]: Stopping kubelet.service... May 13 00:42:05.756390 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:42:05.756553 systemd[1]: Stopped kubelet.service. May 13 00:42:05.758029 systemd[1]: Starting kubelet.service... May 13 00:42:05.832798 systemd[1]: Started kubelet.service. May 13 00:42:05.873943 kubelet[1672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:42:05.873943 kubelet[1672]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 13 00:42:05.873943 kubelet[1672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:42:05.875000 kubelet[1672]: I0513 00:42:05.874939 1672 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:42:06.135877 kubelet[1672]: I0513 00:42:06.135826 1672 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:42:06.135877 kubelet[1672]: I0513 00:42:06.135857 1672 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:42:06.136080 kubelet[1672]: I0513 00:42:06.136044 1672 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:42:06.150202 kubelet[1672]: I0513 00:42:06.150150 1672 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:42:06.150695 kubelet[1672]: E0513 00:42:06.150661 1672 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.59:6443: connect: connection refused May 13 00:42:06.158868 kubelet[1672]: I0513 00:42:06.158843 1672 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:42:06.160049 kubelet[1672]: I0513 00:42:06.160018 1672 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:42:06.160212 kubelet[1672]: I0513 00:42:06.160045 1672 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:42:06.160303 kubelet[1672]: I0513 00:42:06.160218 1672 topology_manager.go:138] "Creating topology manager with none policy" May 13 
00:42:06.160303 kubelet[1672]: I0513 00:42:06.160226 1672 container_manager_linux.go:301] "Creating device plugin manager"
May 13 00:42:06.160348 kubelet[1672]: I0513 00:42:06.160318 1672 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:42:06.160892 kubelet[1672]: I0513 00:42:06.160878 1672 kubelet.go:400] "Attempting to sync node with API server"
May 13 00:42:06.160924 kubelet[1672]: I0513 00:42:06.160892 1672 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:42:06.160924 kubelet[1672]: I0513 00:42:06.160908 1672 kubelet.go:312] "Adding apiserver pod source"
May 13 00:42:06.160966 kubelet[1672]: I0513 00:42:06.160924 1672 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:42:06.161482 kubelet[1672]: W0513 00:42:06.161422 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:06.161534 kubelet[1672]: E0513 00:42:06.161487 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:06.161534 kubelet[1672]: W0513 00:42:06.161424 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:06.161534 kubelet[1672]: E0513 00:42:06.161510 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:06.163719 kubelet[1672]: I0513 00:42:06.163701 1672 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 13 00:42:06.166678 kubelet[1672]: I0513 00:42:06.166631 1672 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:42:06.166678 kubelet[1672]: W0513 00:42:06.166682 1672 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 00:42:06.167187 kubelet[1672]: I0513 00:42:06.167169 1672 server.go:1264] "Started kubelet"
May 13 00:42:06.169897 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 13 00:42:06.170531 kubelet[1672]: I0513 00:42:06.170032 1672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:42:06.175306 kubelet[1672]: I0513 00:42:06.175244 1672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:42:06.175436 kubelet[1672]: I0513 00:42:06.175394 1672 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:42:06.175493 kubelet[1672]: I0513 00:42:06.175474 1672 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:42:06.183407 kubelet[1672]: I0513 00:42:06.183345 1672 server.go:455] "Adding debug handlers to kubelet server"
May 13 00:42:06.183678 kubelet[1672]: I0513 00:42:06.183661 1672 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 00:42:06.184187 kubelet[1672]: I0513 00:42:06.184171 1672 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 00:42:06.184410 kubelet[1672]: I0513 00:42:06.184397 1672 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:42:06.184982 kubelet[1672]: W0513 00:42:06.184949 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:06.185067 kubelet[1672]: E0513 00:42:06.185052 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:06.185944 kubelet[1672]: E0513 00:42:06.185899 1672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms"
May 13 00:42:06.186022 kubelet[1672]: E0513 00:42:06.185916 1672 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:42:06.186692 kubelet[1672]: E0513 00:42:06.186550 1672 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eef65d85ebae0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:42:06.167153376 +0000 UTC m=+0.330984787,LastTimestamp:2025-05-13 00:42:06.167153376 +0000 UTC m=+0.330984787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 00:42:06.188552 kubelet[1672]: I0513 00:42:06.188524 1672 factory.go:221] Registration of the containerd container factory successfully
May 13 00:42:06.188552 kubelet[1672]: I0513 00:42:06.188540 1672 factory.go:221] Registration of the systemd container factory successfully
May 13 00:42:06.188772 kubelet[1672]: I0513 00:42:06.188624 1672 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:42:06.197407 kubelet[1672]: I0513 00:42:06.197377 1672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:42:06.198283 kubelet[1672]: I0513 00:42:06.198252 1672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:42:06.198283 kubelet[1672]: I0513 00:42:06.198275 1672 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 00:42:06.198393 kubelet[1672]: I0513 00:42:06.198290 1672 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 00:42:06.198393 kubelet[1672]: E0513 00:42:06.198322 1672 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:42:06.200591 kubelet[1672]: I0513 00:42:06.200549 1672 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 00:42:06.200591 kubelet[1672]: I0513 00:42:06.200568 1672 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 00:42:06.200681 kubelet[1672]: I0513 00:42:06.200595 1672 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:42:06.200852 kubelet[1672]: W0513 00:42:06.200810 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:06.200948 kubelet[1672]: E0513 00:42:06.200932 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:06.287089 kubelet[1672]: I0513 00:42:06.287039 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:42:06.287424 kubelet[1672]: E0513 00:42:06.287390 1672 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
May 13 00:42:06.298574 kubelet[1672]: E0513 00:42:06.298535 1672 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 00:42:06.386417 kubelet[1672]: E0513 00:42:06.386285 1672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms"
May 13 00:42:06.481027 kubelet[1672]: I0513 00:42:06.480971 1672 policy_none.go:49] "None policy: Start"
May 13 00:42:06.481811 kubelet[1672]: I0513 00:42:06.481795 1672 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 00:42:06.481863 kubelet[1672]: I0513 00:42:06.481816 1672 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:42:06.488123 systemd[1]: Created slice kubepods.slice.
May 13 00:42:06.489187 kubelet[1672]: I0513 00:42:06.488529 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:42:06.489187 kubelet[1672]: E0513 00:42:06.488845 1672 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
May 13 00:42:06.491817 systemd[1]: Created slice kubepods-burstable.slice.
May 13 00:42:06.494133 systemd[1]: Created slice kubepods-besteffort.slice.
May 13 00:42:06.498878 kubelet[1672]: E0513 00:42:06.498844 1672 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 00:42:06.504429 kubelet[1672]: I0513 00:42:06.504396 1672 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:42:06.504594 kubelet[1672]: I0513 00:42:06.504541 1672 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:42:06.504689 kubelet[1672]: I0513 00:42:06.504668 1672 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:42:06.505708 kubelet[1672]: E0513 00:42:06.505694 1672 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 13 00:42:06.787814 kubelet[1672]: E0513 00:42:06.787762 1672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms"
May 13 00:42:06.890402 kubelet[1672]: I0513 00:42:06.890371 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:42:06.890810 kubelet[1672]: E0513 00:42:06.890781 1672 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
May 13 00:42:06.899907 kubelet[1672]: I0513 00:42:06.899855 1672 topology_manager.go:215] "Topology Admit Handler" podUID="76ae7268833c1c38ae6899b8be3d5fb0" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 13 00:42:06.900618 kubelet[1672]: I0513 00:42:06.900596 1672 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 13 00:42:06.901180 kubelet[1672]: I0513 00:42:06.901159 1672 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 13 00:42:06.906099 systemd[1]: Created slice kubepods-burstable-pod76ae7268833c1c38ae6899b8be3d5fb0.slice.
May 13 00:42:06.917562 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice.
May 13 00:42:06.925689 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice.
May 13 00:42:06.988587 kubelet[1672]: I0513 00:42:06.988536 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:42:06.988587 kubelet[1672]: I0513 00:42:06.988563 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:42:06.988721 kubelet[1672]: I0513 00:42:06.988595 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:42:06.988721 kubelet[1672]: I0513 00:42:06.988609 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76ae7268833c1c38ae6899b8be3d5fb0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76ae7268833c1c38ae6899b8be3d5fb0\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:42:06.988721 kubelet[1672]: I0513 00:42:06.988622 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76ae7268833c1c38ae6899b8be3d5fb0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"76ae7268833c1c38ae6899b8be3d5fb0\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:42:06.988721 kubelet[1672]: I0513 00:42:06.988638 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76ae7268833c1c38ae6899b8be3d5fb0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76ae7268833c1c38ae6899b8be3d5fb0\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:42:06.988721 kubelet[1672]: I0513 00:42:06.988652 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:42:06.988832 kubelet[1672]: I0513 00:42:06.988739 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:42:06.988832 kubelet[1672]: I0513 00:42:06.988807 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:42:07.215855 kubelet[1672]: E0513 00:42:07.215808 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:07.216396 env[1208]: time="2025-05-13T00:42:07.216328919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76ae7268833c1c38ae6899b8be3d5fb0,Namespace:kube-system,Attempt:0,}"
May 13 00:42:07.224666 kubelet[1672]: E0513 00:42:07.224624 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:07.224945 kubelet[1672]: W0513 00:42:07.224873 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:07.224945 kubelet[1672]: E0513 00:42:07.224928 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:07.225160 env[1208]: time="2025-05-13T00:42:07.225123014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}"
May 13 00:42:07.228354 kubelet[1672]: E0513 00:42:07.228325 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:07.228609 env[1208]: time="2025-05-13T00:42:07.228556429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}"
May 13 00:42:07.395003 kubelet[1672]: W0513 00:42:07.394956 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:07.395130 kubelet[1672]: E0513 00:42:07.395009 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:07.513090 kubelet[1672]: W0513 00:42:07.512950 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:07.513090 kubelet[1672]: E0513 00:42:07.513016 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:07.588919 kubelet[1672]: E0513 00:42:07.588871 1672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="1.6s"
May 13 00:42:07.654939 kubelet[1672]: W0513 00:42:07.654854 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:07.654939 kubelet[1672]: E0513 00:42:07.654910 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:07.692452 kubelet[1672]: I0513 00:42:07.692400 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:42:07.692796 kubelet[1672]: E0513 00:42:07.692766 1672 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost"
May 13 00:42:08.034679 kubelet[1672]: E0513 00:42:08.034544 1672 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eef65d85ebae0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:42:06.167153376 +0000 UTC m=+0.330984787,LastTimestamp:2025-05-13 00:42:06.167153376 +0000 UTC m=+0.330984787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 00:42:08.177084 kubelet[1672]: E0513 00:42:08.177031 1672 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.59:6443: connect: connection refused
May 13 00:42:08.413664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121022381.mount: Deactivated successfully.
May 13 00:42:08.419431 env[1208]: time="2025-05-13T00:42:08.419370181Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.421884 env[1208]: time="2025-05-13T00:42:08.421844030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.423174 env[1208]: time="2025-05-13T00:42:08.423140651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.425071 env[1208]: time="2025-05-13T00:42:08.425015028Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.426965 env[1208]: time="2025-05-13T00:42:08.426936752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.428372 env[1208]: time="2025-05-13T00:42:08.428333934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.429465 env[1208]: time="2025-05-13T00:42:08.429432578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.430669 env[1208]: time="2025-05-13T00:42:08.430644708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.432619 env[1208]: time="2025-05-13T00:42:08.432597299Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.433789 env[1208]: time="2025-05-13T00:42:08.433762211Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.435189 env[1208]: time="2025-05-13T00:42:08.435155810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.436053 env[1208]: time="2025-05-13T00:42:08.436011250Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:08.453355 env[1208]: time="2025-05-13T00:42:08.453266598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:42:08.453355 env[1208]: time="2025-05-13T00:42:08.453309229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:42:08.453355 env[1208]: time="2025-05-13T00:42:08.453320413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:42:08.453573 env[1208]: time="2025-05-13T00:42:08.453470795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a59fc63490686a3d2d4d1af6b08eb5bcc0cfe3ab1501950808b4ee956de424ab pid=1714 runtime=io.containerd.runc.v2
May 13 00:42:08.465039 env[1208]: time="2025-05-13T00:42:08.464858247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:42:08.465039 env[1208]: time="2025-05-13T00:42:08.464900007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:42:08.465039 env[1208]: time="2025-05-13T00:42:08.464912172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:42:08.465281 env[1208]: time="2025-05-13T00:42:08.465068681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a16e119054668d0cb718c0f9a3c9778b0cc214e020af2f36c61ce8fc8bef014a pid=1737 runtime=io.containerd.runc.v2
May 13 00:42:08.467296 systemd[1]: Started cri-containerd-a59fc63490686a3d2d4d1af6b08eb5bcc0cfe3ab1501950808b4ee956de424ab.scope.
May 13 00:42:08.473187 env[1208]: time="2025-05-13T00:42:08.468683894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:42:08.473187 env[1208]: time="2025-05-13T00:42:08.468741674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:42:08.473187 env[1208]: time="2025-05-13T00:42:08.468762899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:42:08.473187 env[1208]: time="2025-05-13T00:42:08.468923642Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c8a88594eaf95a170d285d23c4ca2dbbc33e0ed78429c569aa2cbf7ece21c3df pid=1754 runtime=io.containerd.runc.v2
May 13 00:42:08.482189 systemd[1]: Started cri-containerd-a16e119054668d0cb718c0f9a3c9778b0cc214e020af2f36c61ce8fc8bef014a.scope.
May 13 00:42:08.488048 systemd[1]: Started cri-containerd-c8a88594eaf95a170d285d23c4ca2dbbc33e0ed78429c569aa2cbf7ece21c3df.scope.
May 13 00:42:08.510843 env[1208]: time="2025-05-13T00:42:08.510799219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a59fc63490686a3d2d4d1af6b08eb5bcc0cfe3ab1501950808b4ee956de424ab\""
May 13 00:42:08.511552 kubelet[1672]: E0513 00:42:08.511525 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:08.513961 env[1208]: time="2025-05-13T00:42:08.513929198Z" level=info msg="CreateContainer within sandbox \"a59fc63490686a3d2d4d1af6b08eb5bcc0cfe3ab1501950808b4ee956de424ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 00:42:08.525473 env[1208]: time="2025-05-13T00:42:08.525421217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76ae7268833c1c38ae6899b8be3d5fb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8a88594eaf95a170d285d23c4ca2dbbc33e0ed78429c569aa2cbf7ece21c3df\""
May 13 00:42:08.526171 env[1208]: time="2025-05-13T00:42:08.526143116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"a16e119054668d0cb718c0f9a3c9778b0cc214e020af2f36c61ce8fc8bef014a\""
May 13 00:42:08.526553 kubelet[1672]: E0513 00:42:08.526368 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:08.527856 kubelet[1672]: E0513 00:42:08.527709 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:08.529590 env[1208]: time="2025-05-13T00:42:08.529551891Z" level=info msg="CreateContainer within sandbox \"c8a88594eaf95a170d285d23c4ca2dbbc33e0ed78429c569aa2cbf7ece21c3df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 00:42:08.529984 env[1208]: time="2025-05-13T00:42:08.529932229Z" level=info msg="CreateContainer within sandbox \"a16e119054668d0cb718c0f9a3c9778b0cc214e020af2f36c61ce8fc8bef014a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 00:42:08.536663 env[1208]: time="2025-05-13T00:42:08.536632497Z" level=info msg="CreateContainer within sandbox \"a59fc63490686a3d2d4d1af6b08eb5bcc0cfe3ab1501950808b4ee956de424ab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"221f2034a4e4b84743937835154f5cd1ea8c6ed764e1c779d18f81c6d867808d\""
May 13 00:42:08.537120 env[1208]: time="2025-05-13T00:42:08.537097818Z" level=info msg="StartContainer for \"221f2034a4e4b84743937835154f5cd1ea8c6ed764e1c779d18f81c6d867808d\""
May 13 00:42:08.549182 env[1208]: time="2025-05-13T00:42:08.549123388Z" level=info msg="CreateContainer within sandbox \"c8a88594eaf95a170d285d23c4ca2dbbc33e0ed78429c569aa2cbf7ece21c3df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36abf7e847835dfe7cacc9ac93c51dc4629a7b69fc3921e6f59a7e76422fec6d\""
May 13 00:42:08.549937 env[1208]: time="2025-05-13T00:42:08.549855591Z" level=info msg="StartContainer for \"36abf7e847835dfe7cacc9ac93c51dc4629a7b69fc3921e6f59a7e76422fec6d\""
May 13 00:42:08.551292 systemd[1]: Started cri-containerd-221f2034a4e4b84743937835154f5cd1ea8c6ed764e1c779d18f81c6d867808d.scope.
May 13 00:42:08.555774 env[1208]: time="2025-05-13T00:42:08.555706676Z" level=info msg="CreateContainer within sandbox \"a16e119054668d0cb718c0f9a3c9778b0cc214e020af2f36c61ce8fc8bef014a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e4f4bd61c52847bad8b8ab6bdcfefa94d90c5f038b8d3ab0780a848d5c7f3c3\""
May 13 00:42:08.556422 env[1208]: time="2025-05-13T00:42:08.556388157Z" level=info msg="StartContainer for \"1e4f4bd61c52847bad8b8ab6bdcfefa94d90c5f038b8d3ab0780a848d5c7f3c3\""
May 13 00:42:08.573254 systemd[1]: Started cri-containerd-1e4f4bd61c52847bad8b8ab6bdcfefa94d90c5f038b8d3ab0780a848d5c7f3c3.scope.
May 13 00:42:08.579385 systemd[1]: Started cri-containerd-36abf7e847835dfe7cacc9ac93c51dc4629a7b69fc3921e6f59a7e76422fec6d.scope.
May 13 00:42:08.603343 env[1208]: time="2025-05-13T00:42:08.603305755Z" level=info msg="StartContainer for \"221f2034a4e4b84743937835154f5cd1ea8c6ed764e1c779d18f81c6d867808d\" returns successfully"
May 13 00:42:08.621270 env[1208]: time="2025-05-13T00:42:08.621221579Z" level=info msg="StartContainer for \"1e4f4bd61c52847bad8b8ab6bdcfefa94d90c5f038b8d3ab0780a848d5c7f3c3\" returns successfully"
May 13 00:42:08.631050 env[1208]: time="2025-05-13T00:42:08.631011749Z" level=info msg="StartContainer for \"36abf7e847835dfe7cacc9ac93c51dc4629a7b69fc3921e6f59a7e76422fec6d\" returns successfully"
May 13 00:42:09.205837 kubelet[1672]: E0513 00:42:09.205796 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:09.207196 kubelet[1672]: E0513 00:42:09.207168 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:09.208563 kubelet[1672]: E0513 00:42:09.208534 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:09.294201 kubelet[1672]: I0513 00:42:09.294161 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:42:09.542440 kubelet[1672]: E0513 00:42:09.542329 1672 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 13 00:42:09.621877 kubelet[1672]: I0513 00:42:09.621682 1672 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 13 00:42:10.163192 kubelet[1672]: I0513 00:42:10.163144 1672 apiserver.go:52] "Watching apiserver"
May 13 00:42:10.185040 kubelet[1672]: I0513 00:42:10.185010 1672 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:42:10.214448 kubelet[1672]: E0513 00:42:10.214401 1672 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 13 00:42:10.214448 kubelet[1672]: E0513 00:42:10.214441 1672 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 13 00:42:10.214844 kubelet[1672]: E0513 00:42:10.214797 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:10.214880 kubelet[1672]: E0513 00:42:10.214848 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:12.264268 systemd[1]: Reloading.
May 13 00:42:12.332486 /usr/lib/systemd/system-generators/torcx-generator[1971]: time="2025-05-13T00:42:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:42:12.332511 /usr/lib/systemd/system-generators/torcx-generator[1971]: time="2025-05-13T00:42:12Z" level=info msg="torcx already run"
May 13 00:42:12.391378 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:42:12.391396 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:42:12.408433 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:42:12.499125 systemd[1]: Stopping kubelet.service...
May 13 00:42:12.515048 systemd[1]: kubelet.service: Deactivated successfully.
May 13 00:42:12.515283 systemd[1]: Stopped kubelet.service.
May 13 00:42:12.516904 systemd[1]: Starting kubelet.service...
May 13 00:42:12.594359 systemd[1]: Started kubelet.service.
May 13 00:42:12.633715 kubelet[2016]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:42:12.633715 kubelet[2016]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 00:42:12.633715 kubelet[2016]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:42:12.634060 kubelet[2016]: I0513 00:42:12.633755 2016 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:42:12.637849 kubelet[2016]: I0513 00:42:12.637805 2016 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 00:42:12.637849 kubelet[2016]: I0513 00:42:12.637831 2016 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:42:12.638037 kubelet[2016]: I0513 00:42:12.638016 2016 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 00:42:12.639208 kubelet[2016]: I0513 00:42:12.639186 2016 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 00:42:12.640215 kubelet[2016]: I0513 00:42:12.640151 2016 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:42:12.647336 kubelet[2016]: I0513 00:42:12.647303 2016 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:42:12.647605 kubelet[2016]: I0513 00:42:12.647550 2016 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:42:12.647779 kubelet[2016]: I0513 00:42:12.647594 2016 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 00:42:12.647862 kubelet[2016]: I0513 00:42:12.647783 2016 topology_manager.go:138] "Creating topology manager with none policy"
May 13
00:42:12.647862 kubelet[2016]: I0513 00:42:12.647793 2016 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:42:12.647862 kubelet[2016]: I0513 00:42:12.647826 2016 state_mem.go:36] "Initialized new in-memory state store" May 13 00:42:12.647936 kubelet[2016]: I0513 00:42:12.647922 2016 kubelet.go:400] "Attempting to sync node with API server" May 13 00:42:12.647936 kubelet[2016]: I0513 00:42:12.647933 2016 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:42:12.647980 kubelet[2016]: I0513 00:42:12.647951 2016 kubelet.go:312] "Adding apiserver pod source" May 13 00:42:12.647980 kubelet[2016]: I0513 00:42:12.647968 2016 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:42:12.648761 kubelet[2016]: I0513 00:42:12.648718 2016 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:42:12.651099 kubelet[2016]: I0513 00:42:12.649272 2016 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:42:12.651099 kubelet[2016]: I0513 00:42:12.649772 2016 server.go:1264] "Started kubelet" May 13 00:42:12.653038 kubelet[2016]: I0513 00:42:12.651805 2016 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:42:12.654838 kubelet[2016]: I0513 00:42:12.654797 2016 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:42:12.656969 kubelet[2016]: I0513 00:42:12.656944 2016 server.go:455] "Adding debug handlers to kubelet server" May 13 00:42:12.657716 kubelet[2016]: I0513 00:42:12.657668 2016 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:42:12.658259 kubelet[2016]: I0513 00:42:12.658246 2016 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:42:12.660331 kubelet[2016]: I0513 00:42:12.660308 2016 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:42:12.660624 kubelet[2016]: I0513 00:42:12.660593 2016 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:42:12.660884 kubelet[2016]: I0513 00:42:12.660865 2016 reconciler.go:26] "Reconciler: start to sync state" May 13 00:42:12.661524 kubelet[2016]: I0513 00:42:12.661497 2016 factory.go:221] Registration of the systemd container factory successfully May 13 00:42:12.661648 kubelet[2016]: I0513 00:42:12.661620 2016 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:42:12.662989 kubelet[2016]: E0513 00:42:12.662966 2016 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:42:12.663691 kubelet[2016]: I0513 00:42:12.663653 2016 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:42:12.664214 kubelet[2016]: I0513 00:42:12.664188 2016 factory.go:221] Registration of the containerd container factory successfully May 13 00:42:12.664560 kubelet[2016]: I0513 00:42:12.664547 2016 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:42:12.664663 kubelet[2016]: I0513 00:42:12.664649 2016 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:42:12.664784 kubelet[2016]: I0513 00:42:12.664770 2016 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:42:12.664982 kubelet[2016]: E0513 00:42:12.664940 2016 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:42:12.689066 kubelet[2016]: I0513 00:42:12.689038 2016 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:42:12.689066 kubelet[2016]: I0513 00:42:12.689055 2016 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:42:12.689066 kubelet[2016]: I0513 00:42:12.689072 2016 state_mem.go:36] "Initialized new in-memory state store" May 13 00:42:12.689249 kubelet[2016]: I0513 00:42:12.689215 2016 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:42:12.689249 kubelet[2016]: I0513 00:42:12.689223 2016 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:42:12.689249 kubelet[2016]: I0513 00:42:12.689239 2016 policy_none.go:49] "None policy: Start" May 13 00:42:12.689778 kubelet[2016]: I0513 00:42:12.689735 2016 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:42:12.689856 kubelet[2016]: I0513 00:42:12.689788 2016 state_mem.go:35] "Initializing new in-memory state store" May 13 00:42:12.690007 kubelet[2016]: I0513 00:42:12.689978 2016 state_mem.go:75] "Updated machine memory state" May 13 00:42:12.693328 kubelet[2016]: I0513 00:42:12.693294 2016 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:42:12.693450 kubelet[2016]: I0513 00:42:12.693420 2016 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:42:12.693531 kubelet[2016]: I0513 00:42:12.693507 2016 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:42:12.764518 kubelet[2016]: I0513 00:42:12.764485 2016 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:42:12.765618 kubelet[2016]: I0513 00:42:12.765507 2016 topology_manager.go:215] "Topology Admit Handler" podUID="76ae7268833c1c38ae6899b8be3d5fb0" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:42:12.765672 kubelet[2016]: I0513 00:42:12.765631 2016 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:42:12.765724 kubelet[2016]: I0513 00:42:12.765710 2016 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:42:12.862561 kubelet[2016]: I0513 00:42:12.862504 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:42:12.862766 kubelet[2016]: I0513 00:42:12.862596 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:42:12.862766 kubelet[2016]: I0513 00:42:12.862625 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76ae7268833c1c38ae6899b8be3d5fb0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"76ae7268833c1c38ae6899b8be3d5fb0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:42:12.862766 kubelet[2016]: I0513 00:42:12.862643 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76ae7268833c1c38ae6899b8be3d5fb0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76ae7268833c1c38ae6899b8be3d5fb0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:42:12.862766 kubelet[2016]: I0513 00:42:12.862660 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:42:12.862766 kubelet[2016]: I0513 00:42:12.862683 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:42:12.862894 kubelet[2016]: I0513 00:42:12.862698 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:42:12.862894 kubelet[2016]: I0513 00:42:12.862755 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" 
(UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:42:12.862894 kubelet[2016]: I0513 00:42:12.862794 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76ae7268833c1c38ae6899b8be3d5fb0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76ae7268833c1c38ae6899b8be3d5fb0\") " pod="kube-system/kube-apiserver-localhost" May 13 00:42:13.209608 kubelet[2016]: E0513 00:42:13.209545 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:13.209852 kubelet[2016]: E0513 00:42:13.209639 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:13.209934 kubelet[2016]: E0513 00:42:13.209892 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:13.385794 kubelet[2016]: I0513 00:42:13.385728 2016 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:42:13.385981 kubelet[2016]: I0513 00:42:13.385835 2016 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:42:13.648970 kubelet[2016]: I0513 00:42:13.648863 2016 apiserver.go:52] "Watching apiserver" May 13 00:42:13.660852 kubelet[2016]: I0513 00:42:13.660816 2016 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:42:13.671010 kubelet[2016]: E0513 00:42:13.670982 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:13.671010 kubelet[2016]: 
E0513 00:42:13.671028 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:13.671197 kubelet[2016]: E0513 00:42:13.671078 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:14.425836 kubelet[2016]: I0513 00:42:14.425761 2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.425725654 podStartE2EDuration="2.425725654s" podCreationTimestamp="2025-05-13 00:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:14.212423829 +0000 UTC m=+1.614344519" watchObservedRunningTime="2025-05-13 00:42:14.425725654 +0000 UTC m=+1.827646354" May 13 00:42:14.558269 kubelet[2016]: I0513 00:42:14.558200 2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.558179289 podStartE2EDuration="2.558179289s" podCreationTimestamp="2025-05-13 00:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:14.426422125 +0000 UTC m=+1.828342835" watchObservedRunningTime="2025-05-13 00:42:14.558179289 +0000 UTC m=+1.960099979" May 13 00:42:14.572240 kubelet[2016]: I0513 00:42:14.572165 2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.572142928 podStartE2EDuration="2.572142928s" podCreationTimestamp="2025-05-13 00:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 
00:42:14.558773078 +0000 UTC m=+1.960693789" watchObservedRunningTime="2025-05-13 00:42:14.572142928 +0000 UTC m=+1.974063628" May 13 00:42:14.573630 sudo[2050]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 00:42:14.573917 sudo[2050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 13 00:42:14.672390 kubelet[2016]: E0513 00:42:14.672341 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:14.673107 kubelet[2016]: E0513 00:42:14.673079 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:15.033555 sudo[2050]: pam_unix(sudo:session): session closed for user root May 13 00:42:16.337228 sudo[1303]: pam_unix(sudo:session): session closed for user root May 13 00:42:16.338360 sshd[1300]: pam_unix(sshd:session): session closed for user core May 13 00:42:16.340969 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:34270.service: Deactivated successfully. May 13 00:42:16.341601 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:42:16.341731 systemd[1]: session-5.scope: Consumed 4.218s CPU time. May 13 00:42:16.342277 systemd-logind[1190]: Session 5 logged out. Waiting for processes to exit. May 13 00:42:16.343105 systemd-logind[1190]: Removed session 5. 
May 13 00:42:18.805552 kubelet[2016]: E0513 00:42:18.805515 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:19.679025 kubelet[2016]: E0513 00:42:19.678991 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:20.680766 kubelet[2016]: E0513 00:42:20.680719 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:22.027247 update_engine[1192]: I0513 00:42:22.027199 1192 update_attempter.cc:509] Updating boot flags... May 13 00:42:22.173240 kubelet[2016]: E0513 00:42:22.173186 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:22.682647 kubelet[2016]: E0513 00:42:22.682614 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:24.179489 kubelet[2016]: E0513 00:42:24.179455 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.584354 kubelet[2016]: I0513 00:42:28.584291 2016 topology_manager.go:215] "Topology Admit Handler" podUID="4f954875-2f87-48d5-9056-6947608e7dc1" podNamespace="kube-system" podName="kube-proxy-98shm" May 13 00:42:28.586204 kubelet[2016]: I0513 00:42:28.586165 2016 topology_manager.go:215] "Topology Admit Handler" podUID="8ff35182-bd26-4292-af67-3cfa5d3cc38c" podNamespace="kube-system" podName="cilium-7kf2j" May 13 00:42:28.591067 
systemd[1]: Created slice kubepods-besteffort-pod4f954875_2f87_48d5_9056_6947608e7dc1.slice. May 13 00:42:28.602134 systemd[1]: Created slice kubepods-burstable-pod8ff35182_bd26_4292_af67_3cfa5d3cc38c.slice. May 13 00:42:28.647690 kubelet[2016]: I0513 00:42:28.647663 2016 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:42:28.648276 env[1208]: time="2025-05-13T00:42:28.648191425Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:42:28.648652 kubelet[2016]: I0513 00:42:28.648638 2016 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:42:28.655452 kubelet[2016]: I0513 00:42:28.655393 2016 topology_manager.go:215] "Topology Admit Handler" podUID="d4410601-a63d-4e6a-8b6a-26cd9cc51ca7" podNamespace="kube-system" podName="cilium-operator-599987898-gdg8k" May 13 00:42:28.661141 systemd[1]: Created slice kubepods-besteffort-podd4410601_a63d_4e6a_8b6a_26cd9cc51ca7.slice. 
May 13 00:42:28.673337 kubelet[2016]: I0513 00:42:28.673312 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f954875-2f87-48d5-9056-6947608e7dc1-xtables-lock\") pod \"kube-proxy-98shm\" (UID: \"4f954875-2f87-48d5-9056-6947608e7dc1\") " pod="kube-system/kube-proxy-98shm" May 13 00:42:28.673477 kubelet[2016]: I0513 00:42:28.673458 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ff35182-bd26-4292-af67-3cfa5d3cc38c-clustermesh-secrets\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.673613 kubelet[2016]: I0513 00:42:28.673597 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-host-proc-sys-net\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.673733 kubelet[2016]: I0513 00:42:28.673714 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzg6w\" (UniqueName: \"kubernetes.io/projected/d4410601-a63d-4e6a-8b6a-26cd9cc51ca7-kube-api-access-dzg6w\") pod \"cilium-operator-599987898-gdg8k\" (UID: \"d4410601-a63d-4e6a-8b6a-26cd9cc51ca7\") " pod="kube-system/cilium-operator-599987898-gdg8k" May 13 00:42:28.673841 kubelet[2016]: I0513 00:42:28.673825 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btj5t\" (UniqueName: \"kubernetes.io/projected/8ff35182-bd26-4292-af67-3cfa5d3cc38c-kube-api-access-btj5t\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.673956 kubelet[2016]: 
I0513 00:42:28.673941 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4410601-a63d-4e6a-8b6a-26cd9cc51ca7-cilium-config-path\") pod \"cilium-operator-599987898-gdg8k\" (UID: \"d4410601-a63d-4e6a-8b6a-26cd9cc51ca7\") " pod="kube-system/cilium-operator-599987898-gdg8k" May 13 00:42:28.674080 kubelet[2016]: I0513 00:42:28.674056 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f954875-2f87-48d5-9056-6947608e7dc1-lib-modules\") pod \"kube-proxy-98shm\" (UID: \"4f954875-2f87-48d5-9056-6947608e7dc1\") " pod="kube-system/kube-proxy-98shm" May 13 00:42:28.674201 kubelet[2016]: I0513 00:42:28.674185 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-lib-modules\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.674312 kubelet[2016]: I0513 00:42:28.674296 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-xtables-lock\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.674416 kubelet[2016]: I0513 00:42:28.674400 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-host-proc-sys-kernel\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.674523 kubelet[2016]: I0513 00:42:28.674508 2016 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f954875-2f87-48d5-9056-6947608e7dc1-kube-proxy\") pod \"kube-proxy-98shm\" (UID: \"4f954875-2f87-48d5-9056-6947608e7dc1\") " pod="kube-system/kube-proxy-98shm" May 13 00:42:28.674646 kubelet[2016]: I0513 00:42:28.674630 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjxr5\" (UniqueName: \"kubernetes.io/projected/4f954875-2f87-48d5-9056-6947608e7dc1-kube-api-access-jjxr5\") pod \"kube-proxy-98shm\" (UID: \"4f954875-2f87-48d5-9056-6947608e7dc1\") " pod="kube-system/kube-proxy-98shm" May 13 00:42:28.674757 kubelet[2016]: I0513 00:42:28.674740 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-etc-cni-netd\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.674867 kubelet[2016]: I0513 00:42:28.674851 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-config-path\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.674975 kubelet[2016]: I0513 00:42:28.674960 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ff35182-bd26-4292-af67-3cfa5d3cc38c-hubble-tls\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.675086 kubelet[2016]: I0513 00:42:28.675070 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-run\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.675200 kubelet[2016]: I0513 00:42:28.675184 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-bpf-maps\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.675309 kubelet[2016]: I0513 00:42:28.675293 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-cgroup\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.675411 kubelet[2016]: I0513 00:42:28.675395 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cni-path\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.675527 kubelet[2016]: I0513 00:42:28.675502 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-hostproc\") pod \"cilium-7kf2j\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " pod="kube-system/cilium-7kf2j" May 13 00:42:28.901605 kubelet[2016]: E0513 00:42:28.901438 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.902527 env[1208]: time="2025-05-13T00:42:28.902075996Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-98shm,Uid:4f954875-2f87-48d5-9056-6947608e7dc1,Namespace:kube-system,Attempt:0,}" May 13 00:42:28.905277 kubelet[2016]: E0513 00:42:28.905126 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.905632 env[1208]: time="2025-05-13T00:42:28.905571451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7kf2j,Uid:8ff35182-bd26-4292-af67-3cfa5d3cc38c,Namespace:kube-system,Attempt:0,}" May 13 00:42:28.930306 env[1208]: time="2025-05-13T00:42:28.930226437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:28.930306 env[1208]: time="2025-05-13T00:42:28.930272663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:28.930470 env[1208]: time="2025-05-13T00:42:28.930285217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:28.930470 env[1208]: time="2025-05-13T00:42:28.930417234Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd45aedc1f9d90e0abbce7949fc405128f6b8b5b9dff209119050a80368456e pid=2125 runtime=io.containerd.runc.v2 May 13 00:42:28.933160 env[1208]: time="2025-05-13T00:42:28.933090931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:28.933218 env[1208]: time="2025-05-13T00:42:28.933169017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:28.933218 env[1208]: time="2025-05-13T00:42:28.933191559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:28.933369 env[1208]: time="2025-05-13T00:42:28.933326922Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81 pid=2140 runtime=io.containerd.runc.v2 May 13 00:42:28.945572 systemd[1]: Started cri-containerd-bdd45aedc1f9d90e0abbce7949fc405128f6b8b5b9dff209119050a80368456e.scope. May 13 00:42:28.951287 systemd[1]: Started cri-containerd-20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81.scope. May 13 00:42:28.964994 kubelet[2016]: E0513 00:42:28.964953 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.966620 env[1208]: time="2025-05-13T00:42:28.966559108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gdg8k,Uid:d4410601-a63d-4e6a-8b6a-26cd9cc51ca7,Namespace:kube-system,Attempt:0,}" May 13 00:42:28.978534 env[1208]: time="2025-05-13T00:42:28.978428304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98shm,Uid:4f954875-2f87-48d5-9056-6947608e7dc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdd45aedc1f9d90e0abbce7949fc405128f6b8b5b9dff209119050a80368456e\"" May 13 00:42:28.979278 kubelet[2016]: E0513 00:42:28.979166 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.981350 env[1208]: time="2025-05-13T00:42:28.980966706Z" level=info msg="CreateContainer within sandbox 
\"bdd45aedc1f9d90e0abbce7949fc405128f6b8b5b9dff209119050a80368456e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:42:28.983648 env[1208]: time="2025-05-13T00:42:28.981919661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7kf2j,Uid:8ff35182-bd26-4292-af67-3cfa5d3cc38c,Namespace:kube-system,Attempt:0,} returns sandbox id \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\"" May 13 00:42:28.983648 env[1208]: time="2025-05-13T00:42:28.983064425Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:42:28.983830 kubelet[2016]: E0513 00:42:28.982270 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.989976 env[1208]: time="2025-05-13T00:42:28.989147255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:28.989976 env[1208]: time="2025-05-13T00:42:28.989196667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:28.989976 env[1208]: time="2025-05-13T00:42:28.989206285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:28.989976 env[1208]: time="2025-05-13T00:42:28.989401951Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83 pid=2203 runtime=io.containerd.runc.v2 May 13 00:42:29.002211 env[1208]: time="2025-05-13T00:42:29.002158889Z" level=info msg="CreateContainer within sandbox \"bdd45aedc1f9d90e0abbce7949fc405128f6b8b5b9dff209119050a80368456e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4bcc710c2ea528737086b527f4585ae05c0c99aff3ec744f839186dbb7d74a9\"" May 13 00:42:29.002986 env[1208]: time="2025-05-13T00:42:29.002955780Z" level=info msg="StartContainer for \"c4bcc710c2ea528737086b527f4585ae05c0c99aff3ec744f839186dbb7d74a9\"" May 13 00:42:29.005031 systemd[1]: Started cri-containerd-671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83.scope. May 13 00:42:29.019529 systemd[1]: Started cri-containerd-c4bcc710c2ea528737086b527f4585ae05c0c99aff3ec744f839186dbb7d74a9.scope. 
May 13 00:42:29.043964 env[1208]: time="2025-05-13T00:42:29.043916524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gdg8k,Uid:d4410601-a63d-4e6a-8b6a-26cd9cc51ca7,Namespace:kube-system,Attempt:0,} returns sandbox id \"671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83\"" May 13 00:42:29.045120 kubelet[2016]: E0513 00:42:29.044652 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:29.052850 env[1208]: time="2025-05-13T00:42:29.052801743Z" level=info msg="StartContainer for \"c4bcc710c2ea528737086b527f4585ae05c0c99aff3ec744f839186dbb7d74a9\" returns successfully" May 13 00:42:29.697348 kubelet[2016]: E0513 00:42:29.697303 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:34.694140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2781605127.mount: Deactivated successfully. May 13 00:42:38.095379 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:46866.service. May 13 00:42:38.156121 sshd[2398]: Accepted publickey for core from 10.0.0.1 port 46866 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:38.157331 sshd[2398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:38.161949 systemd[1]: Started session-6.scope. May 13 00:42:38.162315 systemd-logind[1190]: New session 6 of user core. May 13 00:42:38.276638 sshd[2398]: pam_unix(sshd:session): session closed for user core May 13 00:42:38.279113 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:46866.service: Deactivated successfully. May 13 00:42:38.279763 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:42:38.280361 systemd-logind[1190]: Session 6 logged out. Waiting for processes to exit. 
May 13 00:42:38.281239 systemd-logind[1190]: Removed session 6. May 13 00:42:39.613512 env[1208]: time="2025-05-13T00:42:39.613451114Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:39.615500 env[1208]: time="2025-05-13T00:42:39.615471820Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:39.617272 env[1208]: time="2025-05-13T00:42:39.617235916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:39.617815 env[1208]: time="2025-05-13T00:42:39.617785334Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 00:42:39.619086 env[1208]: time="2025-05-13T00:42:39.619022503Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:42:39.620087 env[1208]: time="2025-05-13T00:42:39.620054056Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:42:39.633117 env[1208]: time="2025-05-13T00:42:39.633056738Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc\"" May 13 00:42:39.633779 env[1208]: time="2025-05-13T00:42:39.633731121Z" level=info msg="StartContainer for \"3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc\"" May 13 00:42:39.650420 systemd[1]: Started cri-containerd-3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc.scope. May 13 00:42:39.686015 systemd[1]: cri-containerd-3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc.scope: Deactivated successfully. May 13 00:42:39.744541 env[1208]: time="2025-05-13T00:42:39.744475648Z" level=info msg="StartContainer for \"3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc\" returns successfully" May 13 00:42:39.933404 kubelet[2016]: E0513 00:42:39.933267 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:40.629916 systemd[1]: run-containerd-runc-k8s.io-3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc-runc.66uC29.mount: Deactivated successfully. May 13 00:42:40.629992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc-rootfs.mount: Deactivated successfully. 
May 13 00:42:40.819049 kubelet[2016]: I0513 00:42:40.818993 2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-98shm" podStartSLOduration=12.818978647 podStartE2EDuration="12.818978647s" podCreationTimestamp="2025-05-13 00:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:29.705495377 +0000 UTC m=+17.107416077" watchObservedRunningTime="2025-05-13 00:42:40.818978647 +0000 UTC m=+28.220899347" May 13 00:42:40.833269 env[1208]: time="2025-05-13T00:42:40.833202289Z" level=info msg="shim disconnected" id=3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc May 13 00:42:40.833269 env[1208]: time="2025-05-13T00:42:40.833261881Z" level=warning msg="cleaning up after shim disconnected" id=3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc namespace=k8s.io May 13 00:42:40.833269 env[1208]: time="2025-05-13T00:42:40.833272250Z" level=info msg="cleaning up dead shim" May 13 00:42:40.840255 env[1208]: time="2025-05-13T00:42:40.840187137Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2463 runtime=io.containerd.runc.v2\n" May 13 00:42:40.971387 kubelet[2016]: E0513 00:42:40.971348 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:40.973357 env[1208]: time="2025-05-13T00:42:40.973321032Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:42:41.340705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013803638.mount: Deactivated successfully. 
May 13 00:42:41.645151 env[1208]: time="2025-05-13T00:42:41.644877904Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4\"" May 13 00:42:41.645329 env[1208]: time="2025-05-13T00:42:41.645301798Z" level=info msg="StartContainer for \"6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4\"" May 13 00:42:41.663161 systemd[1]: run-containerd-runc-k8s.io-6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4-runc.TfUO5d.mount: Deactivated successfully. May 13 00:42:41.666240 systemd[1]: Started cri-containerd-6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4.scope. May 13 00:42:41.688861 env[1208]: time="2025-05-13T00:42:41.688805385Z" level=info msg="StartContainer for \"6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4\" returns successfully" May 13 00:42:41.695289 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:42:41.695479 systemd[1]: Stopped systemd-sysctl.service. May 13 00:42:41.697113 systemd[1]: Stopping systemd-sysctl.service... May 13 00:42:41.698411 systemd[1]: Starting systemd-sysctl.service... May 13 00:42:41.700402 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:42:41.701151 systemd[1]: cri-containerd-6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4.scope: Deactivated successfully. May 13 00:42:41.706245 systemd[1]: Finished systemd-sysctl.service. 
May 13 00:42:41.720292 env[1208]: time="2025-05-13T00:42:41.720239476Z" level=info msg="shim disconnected" id=6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4 May 13 00:42:41.720292 env[1208]: time="2025-05-13T00:42:41.720287606Z" level=warning msg="cleaning up after shim disconnected" id=6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4 namespace=k8s.io May 13 00:42:41.720292 env[1208]: time="2025-05-13T00:42:41.720296683Z" level=info msg="cleaning up dead shim" May 13 00:42:41.727189 env[1208]: time="2025-05-13T00:42:41.727124697Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2528 runtime=io.containerd.runc.v2\n" May 13 00:42:41.975366 kubelet[2016]: E0513 00:42:41.975115 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:41.976912 env[1208]: time="2025-05-13T00:42:41.976846813Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:42:41.995459 env[1208]: time="2025-05-13T00:42:41.995392865Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682\"" May 13 00:42:41.995914 env[1208]: time="2025-05-13T00:42:41.995870500Z" level=info msg="StartContainer for \"7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682\"" May 13 00:42:42.010525 systemd[1]: Started cri-containerd-7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682.scope. 
May 13 00:42:42.034067 env[1208]: time="2025-05-13T00:42:42.033939836Z" level=info msg="StartContainer for \"7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682\" returns successfully" May 13 00:42:42.034632 systemd[1]: cri-containerd-7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682.scope: Deactivated successfully. May 13 00:42:42.059167 env[1208]: time="2025-05-13T00:42:42.059087729Z" level=info msg="shim disconnected" id=7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682 May 13 00:42:42.059167 env[1208]: time="2025-05-13T00:42:42.059154194Z" level=warning msg="cleaning up after shim disconnected" id=7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682 namespace=k8s.io May 13 00:42:42.059167 env[1208]: time="2025-05-13T00:42:42.059167188Z" level=info msg="cleaning up dead shim" May 13 00:42:42.065816 env[1208]: time="2025-05-13T00:42:42.065752538Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2585 runtime=io.containerd.runc.v2\n" May 13 00:42:42.653902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4-rootfs.mount: Deactivated successfully. May 13 00:42:42.979011 kubelet[2016]: E0513 00:42:42.978182 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:42.986041 env[1208]: time="2025-05-13T00:42:42.985989002Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:42:43.279917 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:46882.service. 
May 13 00:42:43.326952 sshd[2599]: Accepted publickey for core from 10.0.0.1 port 46882 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:43.328796 sshd[2599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:43.333413 systemd-logind[1190]: New session 7 of user core. May 13 00:42:43.334394 systemd[1]: Started session-7.scope. May 13 00:42:43.441664 sshd[2599]: pam_unix(sshd:session): session closed for user core May 13 00:42:43.445053 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:46882.service: Deactivated successfully. May 13 00:42:43.445936 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:42:43.446555 systemd-logind[1190]: Session 7 logged out. Waiting for processes to exit. May 13 00:42:43.447545 systemd-logind[1190]: Removed session 7. May 13 00:42:43.581517 env[1208]: time="2025-05-13T00:42:43.581386351Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918\"" May 13 00:42:43.582771 env[1208]: time="2025-05-13T00:42:43.582719159Z" level=info msg="StartContainer for \"3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918\"" May 13 00:42:43.599125 env[1208]: time="2025-05-13T00:42:43.599080849Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:43.602061 systemd[1]: Started cri-containerd-3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918.scope. 
May 13 00:42:43.602604 env[1208]: time="2025-05-13T00:42:43.602078927Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:43.604764 env[1208]: time="2025-05-13T00:42:43.604734794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:43.605229 env[1208]: time="2025-05-13T00:42:43.605202280Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 00:42:43.610495 env[1208]: time="2025-05-13T00:42:43.610456325Z" level=info msg="CreateContainer within sandbox \"671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:42:43.627088 systemd[1]: cri-containerd-3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918.scope: Deactivated successfully. 
May 13 00:42:43.628454 env[1208]: time="2025-05-13T00:42:43.628401223Z" level=info msg="StartContainer for \"3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918\" returns successfully" May 13 00:42:43.629702 env[1208]: time="2025-05-13T00:42:43.629658589Z" level=info msg="CreateContainer within sandbox \"671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\"" May 13 00:42:43.630191 env[1208]: time="2025-05-13T00:42:43.630150561Z" level=info msg="StartContainer for \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\"" May 13 00:42:43.647172 systemd[1]: Started cri-containerd-129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5.scope. May 13 00:42:43.654657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918-rootfs.mount: Deactivated successfully. 
May 13 00:42:43.662393 env[1208]: time="2025-05-13T00:42:43.662210429Z" level=info msg="shim disconnected" id=3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918 May 13 00:42:43.662393 env[1208]: time="2025-05-13T00:42:43.662390666Z" level=warning msg="cleaning up after shim disconnected" id=3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918 namespace=k8s.io May 13 00:42:43.662509 env[1208]: time="2025-05-13T00:42:43.662400434Z" level=info msg="cleaning up dead shim" May 13 00:42:43.669535 env[1208]: time="2025-05-13T00:42:43.669501731Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2678 runtime=io.containerd.runc.v2\n" May 13 00:42:43.677289 env[1208]: time="2025-05-13T00:42:43.677229863Z" level=info msg="StartContainer for \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\" returns successfully" May 13 00:42:43.982886 kubelet[2016]: E0513 00:42:43.982847 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:43.986900 kubelet[2016]: E0513 00:42:43.986867 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:43.990779 env[1208]: time="2025-05-13T00:42:43.990742397Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:42:44.010142 kubelet[2016]: I0513 00:42:44.010079 2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gdg8k" podStartSLOduration=1.449016873 podStartE2EDuration="16.010059817s" podCreationTimestamp="2025-05-13 00:42:28 +0000 UTC" firstStartedPulling="2025-05-13 
00:42:29.046077581 +0000 UTC m=+16.447998281" lastFinishedPulling="2025-05-13 00:42:43.607120524 +0000 UTC m=+31.009041225" observedRunningTime="2025-05-13 00:42:43.992796607 +0000 UTC m=+31.394717327" watchObservedRunningTime="2025-05-13 00:42:44.010059817 +0000 UTC m=+31.411980547" May 13 00:42:44.017922 env[1208]: time="2025-05-13T00:42:44.017853702Z" level=info msg="CreateContainer within sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8\"" May 13 00:42:44.018631 env[1208]: time="2025-05-13T00:42:44.018596654Z" level=info msg="StartContainer for \"678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8\"" May 13 00:42:44.057702 systemd[1]: Started cri-containerd-678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8.scope. May 13 00:42:44.102307 env[1208]: time="2025-05-13T00:42:44.102248845Z" level=info msg="StartContainer for \"678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8\" returns successfully" May 13 00:42:44.287502 kubelet[2016]: I0513 00:42:44.286502 2016 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:42:44.310959 kubelet[2016]: I0513 00:42:44.310895 2016 topology_manager.go:215] "Topology Admit Handler" podUID="4a9335c3-bdf8-4677-ac55-8993a20b2c96" podNamespace="kube-system" podName="coredns-7db6d8ff4d-htdjb" May 13 00:42:44.313990 kubelet[2016]: I0513 00:42:44.313628 2016 topology_manager.go:215] "Topology Admit Handler" podUID="23ce8b23-19bd-490d-8794-9868b23d2613" podNamespace="kube-system" podName="coredns-7db6d8ff4d-p75jt" May 13 00:42:44.318179 systemd[1]: Created slice kubepods-burstable-pod4a9335c3_bdf8_4677_ac55_8993a20b2c96.slice. May 13 00:42:44.323736 systemd[1]: Created slice kubepods-burstable-pod23ce8b23_19bd_490d_8794_9868b23d2613.slice. 
May 13 00:42:44.440237 kubelet[2016]: I0513 00:42:44.440190 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a9335c3-bdf8-4677-ac55-8993a20b2c96-config-volume\") pod \"coredns-7db6d8ff4d-htdjb\" (UID: \"4a9335c3-bdf8-4677-ac55-8993a20b2c96\") " pod="kube-system/coredns-7db6d8ff4d-htdjb" May 13 00:42:44.440508 kubelet[2016]: I0513 00:42:44.440487 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23ce8b23-19bd-490d-8794-9868b23d2613-config-volume\") pod \"coredns-7db6d8ff4d-p75jt\" (UID: \"23ce8b23-19bd-490d-8794-9868b23d2613\") " pod="kube-system/coredns-7db6d8ff4d-p75jt" May 13 00:42:44.440682 kubelet[2016]: I0513 00:42:44.440657 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkzn\" (UniqueName: \"kubernetes.io/projected/4a9335c3-bdf8-4677-ac55-8993a20b2c96-kube-api-access-kdkzn\") pod \"coredns-7db6d8ff4d-htdjb\" (UID: \"4a9335c3-bdf8-4677-ac55-8993a20b2c96\") " pod="kube-system/coredns-7db6d8ff4d-htdjb" May 13 00:42:44.440830 kubelet[2016]: I0513 00:42:44.440808 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gk9z\" (UniqueName: \"kubernetes.io/projected/23ce8b23-19bd-490d-8794-9868b23d2613-kube-api-access-8gk9z\") pod \"coredns-7db6d8ff4d-p75jt\" (UID: \"23ce8b23-19bd-490d-8794-9868b23d2613\") " pod="kube-system/coredns-7db6d8ff4d-p75jt" May 13 00:42:44.622753 kubelet[2016]: E0513 00:42:44.622596 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:44.623781 env[1208]: time="2025-05-13T00:42:44.623726635Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-htdjb,Uid:4a9335c3-bdf8-4677-ac55-8993a20b2c96,Namespace:kube-system,Attempt:0,}" May 13 00:42:44.626974 kubelet[2016]: E0513 00:42:44.626946 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:44.627290 env[1208]: time="2025-05-13T00:42:44.627265056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p75jt,Uid:23ce8b23-19bd-490d-8794-9868b23d2613,Namespace:kube-system,Attempt:0,}" May 13 00:42:44.990434 kubelet[2016]: E0513 00:42:44.990403 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:44.990804 kubelet[2016]: E0513 00:42:44.990689 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:45.992630 kubelet[2016]: E0513 00:42:45.992573 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:46.989811 systemd-networkd[1026]: cilium_host: Link UP May 13 00:42:46.989934 systemd-networkd[1026]: cilium_net: Link UP May 13 00:42:46.991414 systemd-networkd[1026]: cilium_net: Gained carrier May 13 00:42:46.992638 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 00:42:46.992773 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 00:42:46.992776 systemd-networkd[1026]: cilium_host: Gained carrier May 13 00:42:46.994996 kubelet[2016]: E0513 00:42:46.994969 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 13 00:42:47.074423 systemd-networkd[1026]: cilium_vxlan: Link UP May 13 00:42:47.074435 systemd-networkd[1026]: cilium_vxlan: Gained carrier May 13 00:42:47.275617 kernel: NET: Registered PF_ALG protocol family May 13 00:42:47.492708 systemd-networkd[1026]: cilium_host: Gained IPv6LL May 13 00:42:47.790370 systemd-networkd[1026]: lxc_health: Link UP May 13 00:42:47.802758 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:42:47.802507 systemd-networkd[1026]: lxc_health: Gained carrier May 13 00:42:47.956723 systemd-networkd[1026]: cilium_net: Gained IPv6LL May 13 00:42:48.201839 systemd-networkd[1026]: lxc5312482bb179: Link UP May 13 00:42:48.210608 kernel: eth0: renamed from tmp24d62 May 13 00:42:48.220259 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:42:48.220342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5312482bb179: link becomes ready May 13 00:42:48.220462 systemd-networkd[1026]: lxc5312482bb179: Gained carrier May 13 00:42:48.220733 systemd-networkd[1026]: lxc764b5073bc85: Link UP May 13 00:42:48.231841 kernel: eth0: renamed from tmpec2b0 May 13 00:42:48.238632 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc764b5073bc85: link becomes ready May 13 00:42:48.238777 systemd-networkd[1026]: lxc764b5073bc85: Gained carrier May 13 00:42:48.447039 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:42334.service. May 13 00:42:48.487376 sshd[3243]: Accepted publickey for core from 10.0.0.1 port 42334 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:48.488280 sshd[3243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:48.492124 systemd-logind[1190]: New session 8 of user core. May 13 00:42:48.492540 systemd[1]: Started session-8.scope. May 13 00:42:48.629967 sshd[3243]: pam_unix(sshd:session): session closed for user core May 13 00:42:48.632001 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:42334.service: Deactivated successfully. 
May 13 00:42:48.632777 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:42:48.633333 systemd-logind[1190]: Session 8 logged out. Waiting for processes to exit. May 13 00:42:48.634010 systemd-logind[1190]: Removed session 8. May 13 00:42:48.909559 kubelet[2016]: E0513 00:42:48.909436 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:48.922247 kubelet[2016]: I0513 00:42:48.922205 2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7kf2j" podStartSLOduration=10.285987171 podStartE2EDuration="20.922185559s" podCreationTimestamp="2025-05-13 00:42:28 +0000 UTC" firstStartedPulling="2025-05-13 00:42:28.982614582 +0000 UTC m=+16.384535282" lastFinishedPulling="2025-05-13 00:42:39.61881297 +0000 UTC m=+27.020733670" observedRunningTime="2025-05-13 00:42:45.565014583 +0000 UTC m=+32.966935274" watchObservedRunningTime="2025-05-13 00:42:48.922185559 +0000 UTC m=+36.324106259" May 13 00:42:49.047203 systemd-networkd[1026]: cilium_vxlan: Gained IPv6LL May 13 00:42:49.047487 systemd-networkd[1026]: lxc_health: Gained IPv6LL May 13 00:42:50.004865 systemd-networkd[1026]: lxc764b5073bc85: Gained IPv6LL May 13 00:42:50.068831 systemd-networkd[1026]: lxc5312482bb179: Gained IPv6LL May 13 00:42:51.508325 env[1208]: time="2025-05-13T00:42:51.508229208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:51.508325 env[1208]: time="2025-05-13T00:42:51.508282187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:51.508325 env[1208]: time="2025-05-13T00:42:51.508291675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:51.508791 env[1208]: time="2025-05-13T00:42:51.508443490Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24d62562c807d12d78a6e13ca8dac270186603129d54e6ae1ff49fb8660dd97c pid=3281 runtime=io.containerd.runc.v2 May 13 00:42:51.522474 systemd[1]: Started cri-containerd-24d62562c807d12d78a6e13ca8dac270186603129d54e6ae1ff49fb8660dd97c.scope. May 13 00:42:51.532819 systemd-resolved[1141]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:42:51.542847 env[1208]: time="2025-05-13T00:42:51.542774365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:51.543040 env[1208]: time="2025-05-13T00:42:51.542822455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:51.543040 env[1208]: time="2025-05-13T00:42:51.542838575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:51.543040 env[1208]: time="2025-05-13T00:42:51.542988757Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec2b0ba05d349e93fe75db74b87ef279d6391233c37130c408644124d7e7cb7b pid=3314 runtime=io.containerd.runc.v2 May 13 00:42:51.563534 env[1208]: time="2025-05-13T00:42:51.563491384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p75jt,Uid:23ce8b23-19bd-490d-8794-9868b23d2613,Namespace:kube-system,Attempt:0,} returns sandbox id \"24d62562c807d12d78a6e13ca8dac270186603129d54e6ae1ff49fb8660dd97c\"" May 13 00:42:51.564686 systemd[1]: Started cri-containerd-ec2b0ba05d349e93fe75db74b87ef279d6391233c37130c408644124d7e7cb7b.scope. 
May 13 00:42:51.565268 kubelet[2016]: E0513 00:42:51.565244 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:51.567444 env[1208]: time="2025-05-13T00:42:51.567401633Z" level=info msg="CreateContainer within sandbox \"24d62562c807d12d78a6e13ca8dac270186603129d54e6ae1ff49fb8660dd97c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:42:51.577318 systemd-resolved[1141]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:42:51.602309 env[1208]: time="2025-05-13T00:42:51.602250760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-htdjb,Uid:4a9335c3-bdf8-4677-ac55-8993a20b2c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec2b0ba05d349e93fe75db74b87ef279d6391233c37130c408644124d7e7cb7b\"" May 13 00:42:51.603173 kubelet[2016]: E0513 00:42:51.603148 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:51.604701 env[1208]: time="2025-05-13T00:42:51.604654003Z" level=info msg="CreateContainer within sandbox \"ec2b0ba05d349e93fe75db74b87ef279d6391233c37130c408644124d7e7cb7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:42:51.754891 env[1208]: time="2025-05-13T00:42:51.754838558Z" level=info msg="CreateContainer within sandbox \"24d62562c807d12d78a6e13ca8dac270186603129d54e6ae1ff49fb8660dd97c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c4bbdeec0853f6e32efb43130d15a07db63833ff06957ea923c74d539ccacf5b\"" May 13 00:42:51.755402 env[1208]: time="2025-05-13T00:42:51.755345599Z" level=info msg="StartContainer for \"c4bbdeec0853f6e32efb43130d15a07db63833ff06957ea923c74d539ccacf5b\"" May 13 00:42:51.761799 env[1208]: 
time="2025-05-13T00:42:51.761655785Z" level=info msg="CreateContainer within sandbox \"ec2b0ba05d349e93fe75db74b87ef279d6391233c37130c408644124d7e7cb7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ca82e9d83504c162b20f82245cb93623c4d8b57b3b0a98b7eff9a4e9abacfe7\"" May 13 00:42:51.764795 env[1208]: time="2025-05-13T00:42:51.764736290Z" level=info msg="StartContainer for \"6ca82e9d83504c162b20f82245cb93623c4d8b57b3b0a98b7eff9a4e9abacfe7\"" May 13 00:42:51.770931 systemd[1]: Started cri-containerd-c4bbdeec0853f6e32efb43130d15a07db63833ff06957ea923c74d539ccacf5b.scope. May 13 00:42:51.791071 systemd[1]: Started cri-containerd-6ca82e9d83504c162b20f82245cb93623c4d8b57b3b0a98b7eff9a4e9abacfe7.scope. May 13 00:42:51.798332 env[1208]: time="2025-05-13T00:42:51.798301249Z" level=info msg="StartContainer for \"c4bbdeec0853f6e32efb43130d15a07db63833ff06957ea923c74d539ccacf5b\" returns successfully" May 13 00:42:51.820755 env[1208]: time="2025-05-13T00:42:51.820659535Z" level=info msg="StartContainer for \"6ca82e9d83504c162b20f82245cb93623c4d8b57b3b0a98b7eff9a4e9abacfe7\" returns successfully" May 13 00:42:52.004813 kubelet[2016]: E0513 00:42:52.004768 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:52.006760 kubelet[2016]: E0513 00:42:52.006725 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:52.129338 kubelet[2016]: I0513 00:42:52.129187 2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-htdjb" podStartSLOduration=24.129169147 podStartE2EDuration="24.129169147s" podCreationTimestamp="2025-05-13 00:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-05-13 00:42:52.129088596 +0000 UTC m=+39.531009296" watchObservedRunningTime="2025-05-13 00:42:52.129169147 +0000 UTC m=+39.531089847" May 13 00:42:52.636823 kubelet[2016]: I0513 00:42:52.636755 2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p75jt" podStartSLOduration=24.636732141 podStartE2EDuration="24.636732141s" podCreationTimestamp="2025-05-13 00:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:52.372289186 +0000 UTC m=+39.774209886" watchObservedRunningTime="2025-05-13 00:42:52.636732141 +0000 UTC m=+40.038652851" May 13 00:42:53.009034 kubelet[2016]: E0513 00:42:53.009004 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:53.009259 kubelet[2016]: E0513 00:42:53.009169 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:53.634161 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:37216.service. May 13 00:42:53.674965 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 37216 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:53.676224 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:53.679714 systemd-logind[1190]: New session 9 of user core. May 13 00:42:53.680440 systemd[1]: Started session-9.scope. May 13 00:42:53.794453 sshd[3444]: pam_unix(sshd:session): session closed for user core May 13 00:42:53.796658 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:37216.service: Deactivated successfully. May 13 00:42:53.797438 systemd[1]: session-9.scope: Deactivated successfully. 
May 13 00:42:53.798030 systemd-logind[1190]: Session 9 logged out. Waiting for processes to exit. May 13 00:42:53.798871 systemd-logind[1190]: Removed session 9. May 13 00:42:54.010832 kubelet[2016]: E0513 00:42:54.010801 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:54.011202 kubelet[2016]: E0513 00:42:54.010877 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:58.167813 kubelet[2016]: I0513 00:42:58.167757 2016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:42:58.168506 kubelet[2016]: E0513 00:42:58.168490 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:58.798319 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:37232.service. May 13 00:42:58.836871 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 37232 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:58.837880 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:58.841064 systemd-logind[1190]: New session 10 of user core. May 13 00:42:58.841820 systemd[1]: Started session-10.scope. May 13 00:42:58.948991 sshd[3459]: pam_unix(sshd:session): session closed for user core May 13 00:42:58.950987 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:37232.service: Deactivated successfully. May 13 00:42:58.951875 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:42:58.952482 systemd-logind[1190]: Session 10 logged out. Waiting for processes to exit. May 13 00:42:58.953294 systemd-logind[1190]: Removed session 10. 
May 13 00:42:59.020271 kubelet[2016]: E0513 00:42:59.020239 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:03.952981 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:37572.service. May 13 00:43:03.990697 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 37572 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:03.991972 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:03.995526 systemd-logind[1190]: New session 11 of user core. May 13 00:43:03.996378 systemd[1]: Started session-11.scope. May 13 00:43:04.111462 sshd[3476]: pam_unix(sshd:session): session closed for user core May 13 00:43:04.115596 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:37586.service. May 13 00:43:04.116231 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:37572.service: Deactivated successfully. May 13 00:43:04.116938 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:43:04.117751 systemd-logind[1190]: Session 11 logged out. Waiting for processes to exit. May 13 00:43:04.118771 systemd-logind[1190]: Removed session 11. May 13 00:43:04.156821 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 37586 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:04.158818 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:04.162250 systemd-logind[1190]: New session 12 of user core. May 13 00:43:04.163174 systemd[1]: Started session-12.scope. May 13 00:43:04.369766 sshd[3489]: pam_unix(sshd:session): session closed for user core May 13 00:43:04.373087 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:37600.service. May 13 00:43:04.373526 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:37586.service: Deactivated successfully. May 13 00:43:04.374149 systemd[1]: session-12.scope: Deactivated successfully. 
May 13 00:43:04.374752 systemd-logind[1190]: Session 12 logged out. Waiting for processes to exit. May 13 00:43:04.375592 systemd-logind[1190]: Removed session 12. May 13 00:43:04.411452 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 37600 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:04.412637 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:04.415814 systemd-logind[1190]: New session 13 of user core. May 13 00:43:04.416678 systemd[1]: Started session-13.scope. May 13 00:43:04.568759 sshd[3500]: pam_unix(sshd:session): session closed for user core May 13 00:43:04.570862 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:37600.service: Deactivated successfully. May 13 00:43:04.571567 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:43:04.572067 systemd-logind[1190]: Session 13 logged out. Waiting for processes to exit. May 13 00:43:04.572907 systemd-logind[1190]: Removed session 13. May 13 00:43:09.573227 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:37610.service. May 13 00:43:09.610247 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 37610 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:09.611387 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:09.614805 systemd-logind[1190]: New session 14 of user core. May 13 00:43:09.615657 systemd[1]: Started session-14.scope. May 13 00:43:09.728663 sshd[3514]: pam_unix(sshd:session): session closed for user core May 13 00:43:09.730912 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:37610.service: Deactivated successfully. May 13 00:43:09.731591 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:43:09.732168 systemd-logind[1190]: Session 14 logged out. Waiting for processes to exit. May 13 00:43:09.732922 systemd-logind[1190]: Removed session 14. May 13 00:43:14.733029 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:46390.service. 
May 13 00:43:14.770439 sshd[3532]: Accepted publickey for core from 10.0.0.1 port 46390 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:14.771795 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:14.775313 systemd-logind[1190]: New session 15 of user core. May 13 00:43:14.776180 systemd[1]: Started session-15.scope. May 13 00:43:14.885840 sshd[3532]: pam_unix(sshd:session): session closed for user core May 13 00:43:14.888428 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:46390.service: Deactivated successfully. May 13 00:43:14.889149 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:43:14.889799 systemd-logind[1190]: Session 15 logged out. Waiting for processes to exit. May 13 00:43:14.890492 systemd-logind[1190]: Removed session 15. May 13 00:43:19.891238 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:46400.service. May 13 00:43:19.931406 sshd[3545]: Accepted publickey for core from 10.0.0.1 port 46400 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:19.932805 sshd[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:19.936348 systemd-logind[1190]: New session 16 of user core. May 13 00:43:19.937353 systemd[1]: Started session-16.scope. May 13 00:43:20.038136 sshd[3545]: pam_unix(sshd:session): session closed for user core May 13 00:43:20.041079 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:46400.service: Deactivated successfully. May 13 00:43:20.041740 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:43:20.042321 systemd-logind[1190]: Session 16 logged out. Waiting for processes to exit. May 13 00:43:20.043459 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:46416.service. May 13 00:43:20.044818 systemd-logind[1190]: Removed session 16. 
May 13 00:43:20.080298 sshd[3558]: Accepted publickey for core from 10.0.0.1 port 46416 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:20.081253 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:20.084462 systemd-logind[1190]: New session 17 of user core. May 13 00:43:20.085297 systemd[1]: Started session-17.scope. May 13 00:43:20.388616 sshd[3558]: pam_unix(sshd:session): session closed for user core May 13 00:43:20.391701 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:46416.service: Deactivated successfully. May 13 00:43:20.392304 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:43:20.392915 systemd-logind[1190]: Session 17 logged out. Waiting for processes to exit. May 13 00:43:20.394313 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:46424.service. May 13 00:43:20.395189 systemd-logind[1190]: Removed session 17. May 13 00:43:20.434024 sshd[3569]: Accepted publickey for core from 10.0.0.1 port 46424 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:20.435133 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:20.438276 systemd-logind[1190]: New session 18 of user core. May 13 00:43:20.439097 systemd[1]: Started session-18.scope. May 13 00:43:22.042266 sshd[3569]: pam_unix(sshd:session): session closed for user core May 13 00:43:22.047018 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:46440.service. May 13 00:43:22.047473 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:46424.service: Deactivated successfully. May 13 00:43:22.049398 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:43:22.050473 systemd-logind[1190]: Session 18 logged out. Waiting for processes to exit. May 13 00:43:22.051906 systemd-logind[1190]: Removed session 18. 
May 13 00:43:22.085163 sshd[3605]: Accepted publickey for core from 10.0.0.1 port 46440 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:22.086338 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:22.090177 systemd-logind[1190]: New session 19 of user core. May 13 00:43:22.091003 systemd[1]: Started session-19.scope. May 13 00:43:22.341906 sshd[3605]: pam_unix(sshd:session): session closed for user core May 13 00:43:22.345974 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:46440.service: Deactivated successfully. May 13 00:43:22.346760 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:43:22.349003 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:46444.service. May 13 00:43:22.349883 systemd-logind[1190]: Session 19 logged out. Waiting for processes to exit. May 13 00:43:22.350901 systemd-logind[1190]: Removed session 19. May 13 00:43:22.391047 sshd[3618]: Accepted publickey for core from 10.0.0.1 port 46444 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:22.392504 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:22.396318 systemd-logind[1190]: New session 20 of user core. May 13 00:43:22.397158 systemd[1]: Started session-20.scope. May 13 00:43:22.518853 sshd[3618]: pam_unix(sshd:session): session closed for user core May 13 00:43:22.521445 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:46444.service: Deactivated successfully. May 13 00:43:22.522278 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:43:22.522875 systemd-logind[1190]: Session 20 logged out. Waiting for processes to exit. May 13 00:43:22.523659 systemd-logind[1190]: Removed session 20. 
May 13 00:43:25.666463 kubelet[2016]: E0513 00:43:25.666414 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:27.523122 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:36152.service. May 13 00:43:27.560858 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 36152 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:27.562002 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:27.565172 systemd-logind[1190]: New session 21 of user core. May 13 00:43:27.566187 systemd[1]: Started session-21.scope. May 13 00:43:27.672523 sshd[3631]: pam_unix(sshd:session): session closed for user core May 13 00:43:27.675172 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:36152.service: Deactivated successfully. May 13 00:43:27.675905 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:43:27.676519 systemd-logind[1190]: Session 21 logged out. Waiting for processes to exit. May 13 00:43:27.677468 systemd-logind[1190]: Removed session 21. May 13 00:43:32.676569 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:36164.service. May 13 00:43:32.715350 sshd[3650]: Accepted publickey for core from 10.0.0.1 port 36164 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:32.716520 sshd[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:32.720274 systemd-logind[1190]: New session 22 of user core. May 13 00:43:32.721116 systemd[1]: Started session-22.scope. May 13 00:43:32.827116 sshd[3650]: pam_unix(sshd:session): session closed for user core May 13 00:43:32.829853 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:36164.service: Deactivated successfully. May 13 00:43:32.830555 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:43:32.831092 systemd-logind[1190]: Session 22 logged out. 
Waiting for processes to exit. May 13 00:43:32.831730 systemd-logind[1190]: Removed session 22. May 13 00:43:35.666300 kubelet[2016]: E0513 00:43:35.666249 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:36.666166 kubelet[2016]: E0513 00:43:36.666112 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:37.831821 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:47310.service. May 13 00:43:37.868280 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 47310 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:37.869335 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:37.872486 systemd-logind[1190]: New session 23 of user core. May 13 00:43:37.873489 systemd[1]: Started session-23.scope. May 13 00:43:37.971170 sshd[3663]: pam_unix(sshd:session): session closed for user core May 13 00:43:37.973138 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:47310.service: Deactivated successfully. May 13 00:43:37.973908 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:43:37.974383 systemd-logind[1190]: Session 23 logged out. Waiting for processes to exit. May 13 00:43:37.975085 systemd-logind[1190]: Removed session 23. May 13 00:43:42.975422 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:47314.service. May 13 00:43:43.013165 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 47314 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:43.014219 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:43.017317 systemd-logind[1190]: New session 24 of user core. May 13 00:43:43.018059 systemd[1]: Started session-24.scope. 
May 13 00:43:43.114614 sshd[3677]: pam_unix(sshd:session): session closed for user core May 13 00:43:43.117073 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:47314.service: Deactivated successfully. May 13 00:43:43.117613 systemd[1]: session-24.scope: Deactivated successfully. May 13 00:43:43.118026 systemd-logind[1190]: Session 24 logged out. Waiting for processes to exit. May 13 00:43:43.119038 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:47328.service. May 13 00:43:43.119958 systemd-logind[1190]: Removed session 24. May 13 00:43:43.156672 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 47328 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:43.157897 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:43.161536 systemd-logind[1190]: New session 25 of user core. May 13 00:43:43.162306 systemd[1]: Started session-25.scope. May 13 00:43:44.479683 env[1208]: time="2025-05-13T00:43:44.479601217Z" level=info msg="StopContainer for \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\" with timeout 30 (s)" May 13 00:43:44.480169 env[1208]: time="2025-05-13T00:43:44.480000541Z" level=info msg="Stop container \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\" with signal terminated" May 13 00:43:44.490221 systemd[1]: cri-containerd-129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5.scope: Deactivated successfully. May 13 00:43:44.505347 env[1208]: time="2025-05-13T00:43:44.505215275Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:43:44.508915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5-rootfs.mount: Deactivated successfully. 
May 13 00:43:44.512639 env[1208]: time="2025-05-13T00:43:44.512596089Z" level=info msg="StopContainer for \"678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8\" with timeout 2 (s)" May 13 00:43:44.512807 env[1208]: time="2025-05-13T00:43:44.512795771Z" level=info msg="Stop container \"678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8\" with signal terminated" May 13 00:43:44.516044 env[1208]: time="2025-05-13T00:43:44.515980840Z" level=info msg="shim disconnected" id=129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5 May 13 00:43:44.516044 env[1208]: time="2025-05-13T00:43:44.516023792Z" level=warning msg="cleaning up after shim disconnected" id=129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5 namespace=k8s.io May 13 00:43:44.516044 env[1208]: time="2025-05-13T00:43:44.516031888Z" level=info msg="cleaning up dead shim" May 13 00:43:44.518111 systemd-networkd[1026]: lxc_health: Link DOWN May 13 00:43:44.518118 systemd-networkd[1026]: lxc_health: Lost carrier May 13 00:43:44.522979 env[1208]: time="2025-05-13T00:43:44.522936149Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3743 runtime=io.containerd.runc.v2\n" May 13 00:43:44.526104 env[1208]: time="2025-05-13T00:43:44.526051605Z" level=info msg="StopContainer for \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\" returns successfully" May 13 00:43:44.526798 env[1208]: time="2025-05-13T00:43:44.526764900Z" level=info msg="StopPodSandbox for \"671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83\"" May 13 00:43:44.526852 env[1208]: time="2025-05-13T00:43:44.526834314Z" level=info msg="Container to stop \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:44.528652 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83-shm.mount: Deactivated successfully. May 13 00:43:44.532934 systemd[1]: cri-containerd-671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83.scope: Deactivated successfully. May 13 00:43:44.551668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83-rootfs.mount: Deactivated successfully. May 13 00:43:44.552273 systemd[1]: cri-containerd-678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8.scope: Deactivated successfully. May 13 00:43:44.552511 systemd[1]: cri-containerd-678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8.scope: Consumed 6.008s CPU time. May 13 00:43:44.559396 env[1208]: time="2025-05-13T00:43:44.559348104Z" level=info msg="shim disconnected" id=671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83 May 13 00:43:44.559396 env[1208]: time="2025-05-13T00:43:44.559394423Z" level=warning msg="cleaning up after shim disconnected" id=671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83 namespace=k8s.io May 13 00:43:44.559396 env[1208]: time="2025-05-13T00:43:44.559403470Z" level=info msg="cleaning up dead shim" May 13 00:43:44.566343 env[1208]: time="2025-05-13T00:43:44.566292452Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3781 runtime=io.containerd.runc.v2\n" May 13 00:43:44.566723 env[1208]: time="2025-05-13T00:43:44.566683470Z" level=info msg="TearDown network for sandbox \"671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83\" successfully" May 13 00:43:44.566723 env[1208]: time="2025-05-13T00:43:44.566715653Z" level=info msg="StopPodSandbox for \"671991e59401fc0a146e56791c8db48bc57f43f565fd06ce25200c1b52657a83\" returns successfully" May 13 00:43:44.570016 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8-rootfs.mount: Deactivated successfully. May 13 00:43:44.732968 env[1208]: time="2025-05-13T00:43:44.732823142Z" level=info msg="shim disconnected" id=678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8 May 13 00:43:44.732968 env[1208]: time="2025-05-13T00:43:44.732873298Z" level=warning msg="cleaning up after shim disconnected" id=678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8 namespace=k8s.io May 13 00:43:44.732968 env[1208]: time="2025-05-13T00:43:44.732885491Z" level=info msg="cleaning up dead shim" May 13 00:43:44.739923 env[1208]: time="2025-05-13T00:43:44.739858023Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3800 runtime=io.containerd.runc.v2\n" May 13 00:43:44.760996 kubelet[2016]: I0513 00:43:44.760956 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzg6w\" (UniqueName: \"kubernetes.io/projected/d4410601-a63d-4e6a-8b6a-26cd9cc51ca7-kube-api-access-dzg6w\") pod \"d4410601-a63d-4e6a-8b6a-26cd9cc51ca7\" (UID: \"d4410601-a63d-4e6a-8b6a-26cd9cc51ca7\") " May 13 00:43:44.760996 kubelet[2016]: I0513 00:43:44.760999 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4410601-a63d-4e6a-8b6a-26cd9cc51ca7-cilium-config-path\") pod \"d4410601-a63d-4e6a-8b6a-26cd9cc51ca7\" (UID: \"d4410601-a63d-4e6a-8b6a-26cd9cc51ca7\") " May 13 00:43:44.762892 kubelet[2016]: I0513 00:43:44.762866 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4410601-a63d-4e6a-8b6a-26cd9cc51ca7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d4410601-a63d-4e6a-8b6a-26cd9cc51ca7" (UID: "d4410601-a63d-4e6a-8b6a-26cd9cc51ca7"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:43:44.803849 kubelet[2016]: I0513 00:43:44.803795 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4410601-a63d-4e6a-8b6a-26cd9cc51ca7-kube-api-access-dzg6w" (OuterVolumeSpecName: "kube-api-access-dzg6w") pod "d4410601-a63d-4e6a-8b6a-26cd9cc51ca7" (UID: "d4410601-a63d-4e6a-8b6a-26cd9cc51ca7"). InnerVolumeSpecName "kube-api-access-dzg6w". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:44.862058 kubelet[2016]: I0513 00:43:44.862010 2016 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dzg6w\" (UniqueName: \"kubernetes.io/projected/d4410601-a63d-4e6a-8b6a-26cd9cc51ca7-kube-api-access-dzg6w\") on node \"localhost\" DevicePath \"\"" May 13 00:43:44.862058 kubelet[2016]: I0513 00:43:44.862049 2016 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4410601-a63d-4e6a-8b6a-26cd9cc51ca7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:44.890759 env[1208]: time="2025-05-13T00:43:44.890699980Z" level=info msg="StopContainer for \"678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8\" returns successfully" May 13 00:43:44.891281 env[1208]: time="2025-05-13T00:43:44.891250334Z" level=info msg="StopPodSandbox for \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\"" May 13 00:43:44.891407 env[1208]: time="2025-05-13T00:43:44.891381495Z" level=info msg="Container to stop \"3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:44.891474 env[1208]: time="2025-05-13T00:43:44.891406523Z" level=info msg="Container to stop \"6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 
00:43:44.891474 env[1208]: time="2025-05-13T00:43:44.891418445Z" level=info msg="Container to stop \"7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:44.891474 env[1208]: time="2025-05-13T00:43:44.891428175Z" level=info msg="Container to stop \"3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:44.891474 env[1208]: time="2025-05-13T00:43:44.891438163Z" level=info msg="Container to stop \"678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:44.897654 systemd[1]: cri-containerd-20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81.scope: Deactivated successfully. May 13 00:43:45.098071 env[1208]: time="2025-05-13T00:43:45.097731464Z" level=info msg="shim disconnected" id=20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81 May 13 00:43:45.098071 env[1208]: time="2025-05-13T00:43:45.097914624Z" level=warning msg="cleaning up after shim disconnected" id=20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81 namespace=k8s.io May 13 00:43:45.098071 env[1208]: time="2025-05-13T00:43:45.097925084Z" level=info msg="cleaning up dead shim" May 13 00:43:45.105400 env[1208]: time="2025-05-13T00:43:45.105355075Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3831 runtime=io.containerd.runc.v2\n" May 13 00:43:45.105770 env[1208]: time="2025-05-13T00:43:45.105721927Z" level=info msg="TearDown network for sandbox \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" successfully" May 13 00:43:45.105770 env[1208]: time="2025-05-13T00:43:45.105758446Z" level=info msg="StopPodSandbox for \"20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81\" 
returns successfully" May 13 00:43:45.107552 kubelet[2016]: I0513 00:43:45.107521 2016 scope.go:117] "RemoveContainer" containerID="129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5" May 13 00:43:45.110031 env[1208]: time="2025-05-13T00:43:45.109982281Z" level=info msg="RemoveContainer for \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\"" May 13 00:43:45.111818 systemd[1]: Removed slice kubepods-besteffort-podd4410601_a63d_4e6a_8b6a_26cd9cc51ca7.slice. May 13 00:43:45.118840 env[1208]: time="2025-05-13T00:43:45.118788475Z" level=info msg="RemoveContainer for \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\" returns successfully" May 13 00:43:45.119199 kubelet[2016]: I0513 00:43:45.119163 2016 scope.go:117] "RemoveContainer" containerID="129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5" May 13 00:43:45.119475 env[1208]: time="2025-05-13T00:43:45.119400006Z" level=error msg="ContainerStatus for \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\": not found" May 13 00:43:45.119638 kubelet[2016]: E0513 00:43:45.119594 2016 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\": not found" containerID="129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5" May 13 00:43:45.119740 kubelet[2016]: I0513 00:43:45.119621 2016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5"} err="failed to get container status \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"129bdf7afbbd77a71aa30a7db65fce65acea1cbd67a7a08876c3442a3630b5a5\": not found" May 13 00:43:45.263718 kubelet[2016]: I0513 00:43:45.263668 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-host-proc-sys-kernel\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.263718 kubelet[2016]: I0513 00:43:45.263713 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-hostproc\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.263718 kubelet[2016]: I0513 00:43:45.263728 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-cgroup\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.263956 kubelet[2016]: I0513 00:43:45.263740 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-xtables-lock\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.263956 kubelet[2016]: I0513 00:43:45.263753 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-bpf-maps\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.263956 kubelet[2016]: I0513 00:43:45.263765 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-run\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.263956 kubelet[2016]: I0513 00:43:45.263780 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-etc-cni-netd\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.263956 kubelet[2016]: I0513 00:43:45.263806 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-config-path\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.263956 kubelet[2016]: I0513 00:43:45.263826 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cni-path\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.264092 kubelet[2016]: I0513 00:43:45.263841 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-host-proc-sys-net\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.264092 kubelet[2016]: I0513 00:43:45.263831 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.264092 kubelet[2016]: I0513 00:43:45.263861 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-hostproc" (OuterVolumeSpecName: "hostproc") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.264092 kubelet[2016]: I0513 00:43:45.263857 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btj5t\" (UniqueName: \"kubernetes.io/projected/8ff35182-bd26-4292-af67-3cfa5d3cc38c-kube-api-access-btj5t\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.264092 kubelet[2016]: I0513 00:43:45.263936 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-lib-modules\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.264211 kubelet[2016]: I0513 00:43:45.263961 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ff35182-bd26-4292-af67-3cfa5d3cc38c-hubble-tls\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.264211 kubelet[2016]: I0513 00:43:45.263981 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ff35182-bd26-4292-af67-3cfa5d3cc38c-clustermesh-secrets\") pod \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\" (UID: \"8ff35182-bd26-4292-af67-3cfa5d3cc38c\") " May 13 00:43:45.264211 kubelet[2016]: I0513 00:43:45.264027 2016 reconciler_common.go:289] "Volume detached for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.264211 kubelet[2016]: I0513 00:43:45.264040 2016 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.264211 kubelet[2016]: I0513 00:43:45.263829 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.264211 kubelet[2016]: I0513 00:43:45.263905 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.264348 kubelet[2016]: I0513 00:43:45.263917 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.264348 kubelet[2016]: I0513 00:43:45.263926 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.264348 kubelet[2016]: I0513 00:43:45.263936 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.264348 kubelet[2016]: I0513 00:43:45.263950 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cni-path" (OuterVolumeSpecName: "cni-path") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.264348 kubelet[2016]: I0513 00:43:45.264145 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.264470 kubelet[2016]: I0513 00:43:45.264210 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:45.266487 kubelet[2016]: I0513 00:43:45.266455 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:43:45.267177 kubelet[2016]: I0513 00:43:45.267148 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff35182-bd26-4292-af67-3cfa5d3cc38c-kube-api-access-btj5t" (OuterVolumeSpecName: "kube-api-access-btj5t") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "kube-api-access-btj5t". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:45.267250 kubelet[2016]: I0513 00:43:45.267232 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff35182-bd26-4292-af67-3cfa5d3cc38c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:45.267767 kubelet[2016]: I0513 00:43:45.267735 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff35182-bd26-4292-af67-3cfa5d3cc38c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8ff35182-bd26-4292-af67-3cfa5d3cc38c" (UID: "8ff35182-bd26-4292-af67-3cfa5d3cc38c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:43:45.365247 kubelet[2016]: I0513 00:43:45.365115 2016 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365247 kubelet[2016]: I0513 00:43:45.365153 2016 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365247 kubelet[2016]: I0513 00:43:45.365162 2016 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365247 kubelet[2016]: I0513 00:43:45.365171 2016 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365247 kubelet[2016]: I0513 00:43:45.365182 2016 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365247 kubelet[2016]: I0513 00:43:45.365192 2016 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365247 kubelet[2016]: I0513 00:43:45.365199 2016 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365247 kubelet[2016]: I0513 00:43:45.365205 2016 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365598 kubelet[2016]: I0513 00:43:45.365236 2016 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-btj5t\" (UniqueName: \"kubernetes.io/projected/8ff35182-bd26-4292-af67-3cfa5d3cc38c-kube-api-access-btj5t\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365598 kubelet[2016]: I0513 00:43:45.365243 2016 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ff35182-bd26-4292-af67-3cfa5d3cc38c-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365598 kubelet[2016]: I0513 00:43:45.365262 2016 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8ff35182-bd26-4292-af67-3cfa5d3cc38c-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.365598 kubelet[2016]: I0513 00:43:45.365269 2016 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8ff35182-bd26-4292-af67-3cfa5d3cc38c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:43:45.489868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81-rootfs.mount: Deactivated successfully. 
May 13 00:43:45.489980 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20f6137b3e3aa59e2bc4190273b809fb8f2753c4c2b66407fde2c3dd6133db81-shm.mount: Deactivated successfully. May 13 00:43:45.490041 systemd[1]: var-lib-kubelet-pods-8ff35182\x2dbd26\x2d4292\x2daf67\x2d3cfa5d3cc38c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbtj5t.mount: Deactivated successfully. May 13 00:43:45.490100 systemd[1]: var-lib-kubelet-pods-d4410601\x2da63d\x2d4e6a\x2d8b6a\x2d26cd9cc51ca7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddzg6w.mount: Deactivated successfully. May 13 00:43:45.490151 systemd[1]: var-lib-kubelet-pods-8ff35182\x2dbd26\x2d4292\x2daf67\x2d3cfa5d3cc38c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:43:45.490206 systemd[1]: var-lib-kubelet-pods-8ff35182\x2dbd26\x2d4292\x2daf67\x2d3cfa5d3cc38c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:43:45.666849 kubelet[2016]: E0513 00:43:45.666691 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:46.112338 kubelet[2016]: I0513 00:43:46.112310 2016 scope.go:117] "RemoveContainer" containerID="678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8" May 13 00:43:46.113684 env[1208]: time="2025-05-13T00:43:46.113338705Z" level=info msg="RemoveContainer for \"678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8\"" May 13 00:43:46.116612 systemd[1]: Removed slice kubepods-burstable-pod8ff35182_bd26_4292_af67_3cfa5d3cc38c.slice. May 13 00:43:46.116717 systemd[1]: kubepods-burstable-pod8ff35182_bd26_4292_af67_3cfa5d3cc38c.slice: Consumed 6.099s CPU time. 
May 13 00:43:46.117461 env[1208]: time="2025-05-13T00:43:46.117414331Z" level=info msg="RemoveContainer for \"678a0a215a6820b4fa7caf6ba6f2b1a3d5fe9dafb72efa9261a05936b9affcc8\" returns successfully" May 13 00:43:46.117637 kubelet[2016]: I0513 00:43:46.117616 2016 scope.go:117] "RemoveContainer" containerID="3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918" May 13 00:43:46.118556 env[1208]: time="2025-05-13T00:43:46.118528233Z" level=info msg="RemoveContainer for \"3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918\"" May 13 00:43:46.122237 env[1208]: time="2025-05-13T00:43:46.122198934Z" level=info msg="RemoveContainer for \"3522fdcc1f50618aa3a53992d875ba40303951015c0219768831c3247a777918\" returns successfully" May 13 00:43:46.122449 kubelet[2016]: I0513 00:43:46.122427 2016 scope.go:117] "RemoveContainer" containerID="7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682" May 13 00:43:46.123367 env[1208]: time="2025-05-13T00:43:46.123344266Z" level=info msg="RemoveContainer for \"7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682\"" May 13 00:43:46.128453 env[1208]: time="2025-05-13T00:43:46.128388356Z" level=info msg="RemoveContainer for \"7d596960ab5597b9bc7b316937454de7309ea0511b591ef551d3e61f4476b682\" returns successfully" May 13 00:43:46.128657 kubelet[2016]: I0513 00:43:46.128633 2016 scope.go:117] "RemoveContainer" containerID="6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4" May 13 00:43:46.132387 env[1208]: time="2025-05-13T00:43:46.132318184Z" level=info msg="RemoveContainer for \"6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4\"" May 13 00:43:46.135413 env[1208]: time="2025-05-13T00:43:46.135380211Z" level=info msg="RemoveContainer for \"6d821bee26666cfafd9cf698052561f2daf044790a8261592b45371fd982d0e4\" returns successfully" May 13 00:43:46.135560 kubelet[2016]: I0513 00:43:46.135524 2016 scope.go:117] "RemoveContainer" 
containerID="3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc" May 13 00:43:46.136431 env[1208]: time="2025-05-13T00:43:46.136404470Z" level=info msg="RemoveContainer for \"3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc\"" May 13 00:43:46.139018 env[1208]: time="2025-05-13T00:43:46.138996819Z" level=info msg="RemoveContainer for \"3941ba6c1537080e80f007315fb77cd8d208ae50b8b49c6662430a2d952db6cc\" returns successfully" May 13 00:43:46.452828 sshd[3690]: pam_unix(sshd:session): session closed for user core May 13 00:43:46.456250 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:51740.service. May 13 00:43:46.456732 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:47328.service: Deactivated successfully. May 13 00:43:46.457210 systemd[1]: session-25.scope: Deactivated successfully. May 13 00:43:46.457723 systemd-logind[1190]: Session 25 logged out. Waiting for processes to exit. May 13 00:43:46.458649 systemd-logind[1190]: Removed session 25. May 13 00:43:46.494710 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 51740 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:46.495644 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:46.499227 systemd-logind[1190]: New session 26 of user core. May 13 00:43:46.499997 systemd[1]: Started session-26.scope. 
May 13 00:43:46.667904 kubelet[2016]: I0513 00:43:46.667852 2016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff35182-bd26-4292-af67-3cfa5d3cc38c" path="/var/lib/kubelet/pods/8ff35182-bd26-4292-af67-3cfa5d3cc38c/volumes" May 13 00:43:46.668433 kubelet[2016]: I0513 00:43:46.668405 2016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4410601-a63d-4e6a-8b6a-26cd9cc51ca7" path="/var/lib/kubelet/pods/d4410601-a63d-4e6a-8b6a-26cd9cc51ca7/volumes" May 13 00:43:47.086919 sshd[3849]: pam_unix(sshd:session): session closed for user core May 13 00:43:47.090377 systemd[1]: Started sshd@26-10.0.0.59:22-10.0.0.1:51742.service. May 13 00:43:47.094865 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:51740.service: Deactivated successfully. May 13 00:43:47.095645 systemd[1]: session-26.scope: Deactivated successfully. May 13 00:43:47.096708 systemd-logind[1190]: Session 26 logged out. Waiting for processes to exit. May 13 00:43:47.097554 systemd-logind[1190]: Removed session 26. 
May 13 00:43:47.114935 kubelet[2016]: I0513 00:43:47.113452 2016 topology_manager.go:215] "Topology Admit Handler" podUID="fb0da15f-ce22-4c29-b41e-bff78ed4339c" podNamespace="kube-system" podName="cilium-slghs" May 13 00:43:47.114935 kubelet[2016]: E0513 00:43:47.113499 2016 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ff35182-bd26-4292-af67-3cfa5d3cc38c" containerName="mount-cgroup" May 13 00:43:47.114935 kubelet[2016]: E0513 00:43:47.113506 2016 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d4410601-a63d-4e6a-8b6a-26cd9cc51ca7" containerName="cilium-operator" May 13 00:43:47.114935 kubelet[2016]: E0513 00:43:47.113513 2016 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ff35182-bd26-4292-af67-3cfa5d3cc38c" containerName="cilium-agent" May 13 00:43:47.114935 kubelet[2016]: E0513 00:43:47.113519 2016 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ff35182-bd26-4292-af67-3cfa5d3cc38c" containerName="apply-sysctl-overwrites" May 13 00:43:47.114935 kubelet[2016]: E0513 00:43:47.113525 2016 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ff35182-bd26-4292-af67-3cfa5d3cc38c" containerName="mount-bpf-fs" May 13 00:43:47.114935 kubelet[2016]: E0513 00:43:47.113530 2016 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ff35182-bd26-4292-af67-3cfa5d3cc38c" containerName="clean-cilium-state" May 13 00:43:47.114935 kubelet[2016]: I0513 00:43:47.113558 2016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4410601-a63d-4e6a-8b6a-26cd9cc51ca7" containerName="cilium-operator" May 13 00:43:47.114935 kubelet[2016]: I0513 00:43:47.113564 2016 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff35182-bd26-4292-af67-3cfa5d3cc38c" containerName="cilium-agent" May 13 00:43:47.119877 systemd[1]: Created slice kubepods-burstable-podfb0da15f_ce22_4c29_b41e_bff78ed4339c.slice. 
May 13 00:43:47.134382 sshd[3860]: Accepted publickey for core from 10.0.0.1 port 51742 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:47.135889 sshd[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:47.141319 systemd[1]: Started session-27.scope. May 13 00:43:47.142221 systemd-logind[1190]: New session 27 of user core. May 13 00:43:47.256867 sshd[3860]: pam_unix(sshd:session): session closed for user core May 13 00:43:47.259845 systemd[1]: sshd@26-10.0.0.59:22-10.0.0.1:51742.service: Deactivated successfully. May 13 00:43:47.260395 systemd[1]: session-27.scope: Deactivated successfully. May 13 00:43:47.262098 systemd[1]: Started sshd@27-10.0.0.59:22-10.0.0.1:51750.service. May 13 00:43:47.264722 systemd-logind[1190]: Session 27 logged out. Waiting for processes to exit. May 13 00:43:47.267169 systemd-logind[1190]: Removed session 27. May 13 00:43:47.272419 kubelet[2016]: E0513 00:43:47.272364 2016 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-w64dd lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-slghs" podUID="fb0da15f-ce22-4c29-b41e-bff78ed4339c" May 13 00:43:47.276167 kubelet[2016]: I0513 00:43:47.276125 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cni-path\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276167 kubelet[2016]: I0513 00:43:47.276164 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-ipsec-secrets\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276270 kubelet[2016]: I0513 00:43:47.276180 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-host-proc-sys-kernel\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276270 kubelet[2016]: I0513 00:43:47.276193 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w64dd\" (UniqueName: \"kubernetes.io/projected/fb0da15f-ce22-4c29-b41e-bff78ed4339c-kube-api-access-w64dd\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276270 kubelet[2016]: I0513 00:43:47.276208 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-etc-cni-netd\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276270 kubelet[2016]: I0513 00:43:47.276221 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-run\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276270 kubelet[2016]: I0513 00:43:47.276237 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-hostproc\") pod \"cilium-slghs\" (UID: 
\"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276270 kubelet[2016]: I0513 00:43:47.276250 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-xtables-lock\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276406 kubelet[2016]: I0513 00:43:47.276262 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-lib-modules\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276406 kubelet[2016]: I0513 00:43:47.276274 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-cgroup\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276406 kubelet[2016]: I0513 00:43:47.276286 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb0da15f-ce22-4c29-b41e-bff78ed4339c-clustermesh-secrets\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276406 kubelet[2016]: I0513 00:43:47.276300 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-config-path\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276406 kubelet[2016]: I0513 
00:43:47.276314 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb0da15f-ce22-4c29-b41e-bff78ed4339c-hubble-tls\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276406 kubelet[2016]: I0513 00:43:47.276326 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-bpf-maps\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.276619 kubelet[2016]: I0513 00:43:47.276339 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-host-proc-sys-net\") pod \"cilium-slghs\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " pod="kube-system/cilium-slghs" May 13 00:43:47.302406 sshd[3874]: Accepted publickey for core from 10.0.0.1 port 51750 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:47.303609 sshd[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:47.308431 systemd-logind[1190]: New session 28 of user core. May 13 00:43:47.309196 systemd[1]: Started session-28.scope. 
May 13 00:43:47.712068 kubelet[2016]: E0513 00:43:47.712024 2016 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:43:48.281659 kubelet[2016]: I0513 00:43:48.281570 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb0da15f-ce22-4c29-b41e-bff78ed4339c-clustermesh-secrets\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.281659 kubelet[2016]: I0513 00:43:48.281656 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-run\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282023 kubelet[2016]: I0513 00:43:48.281675 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-cgroup\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282023 kubelet[2016]: I0513 00:43:48.281695 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-host-proc-sys-kernel\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282023 kubelet[2016]: I0513 00:43:48.281718 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-host-proc-sys-net\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: 
\"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282023 kubelet[2016]: I0513 00:43:48.281736 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cni-path\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282023 kubelet[2016]: I0513 00:43:48.281751 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-xtables-lock\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282023 kubelet[2016]: I0513 00:43:48.281770 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w64dd\" (UniqueName: \"kubernetes.io/projected/fb0da15f-ce22-4c29-b41e-bff78ed4339c-kube-api-access-w64dd\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282180 kubelet[2016]: I0513 00:43:48.281788 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb0da15f-ce22-4c29-b41e-bff78ed4339c-hubble-tls\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282180 kubelet[2016]: I0513 00:43:48.281802 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-etc-cni-netd\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282180 kubelet[2016]: I0513 00:43:48.281817 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-lib-modules\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282180 kubelet[2016]: I0513 00:43:48.281755 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.282180 kubelet[2016]: I0513 00:43:48.281831 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-config-path\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282180 kubelet[2016]: I0513 00:43:48.281781 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.282329 kubelet[2016]: I0513 00:43:48.281847 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-ipsec-secrets\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282329 kubelet[2016]: I0513 00:43:48.281860 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-hostproc\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282329 kubelet[2016]: I0513 00:43:48.281873 2016 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-bpf-maps\") pod \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\" (UID: \"fb0da15f-ce22-4c29-b41e-bff78ed4339c\") " May 13 00:43:48.282329 kubelet[2016]: I0513 00:43:48.281903 2016 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.282329 kubelet[2016]: I0513 00:43:48.281911 2016 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.283652 kubelet[2016]: I0513 00:43:48.281796 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cni-path" (OuterVolumeSpecName: "cni-path") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.283652 kubelet[2016]: I0513 00:43:48.281806 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.283652 kubelet[2016]: I0513 00:43:48.281816 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.283792 kubelet[2016]: I0513 00:43:48.281857 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.283792 kubelet[2016]: I0513 00:43:48.281954 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.283792 kubelet[2016]: I0513 00:43:48.283596 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:43:48.283792 kubelet[2016]: I0513 00:43:48.283615 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.283792 kubelet[2016]: I0513 00:43:48.283624 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.285269 kubelet[2016]: I0513 00:43:48.285246 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0da15f-ce22-4c29-b41e-bff78ed4339c-kube-api-access-w64dd" (OuterVolumeSpecName: "kube-api-access-w64dd") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "kube-api-access-w64dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:48.285338 kubelet[2016]: I0513 00:43:48.285278 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-hostproc" (OuterVolumeSpecName: "hostproc") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:48.285448 systemd[1]: var-lib-kubelet-pods-fb0da15f\x2dce22\x2d4c29\x2db41e\x2dbff78ed4339c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:43:48.285709 kubelet[2016]: I0513 00:43:48.285642 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:43:48.286627 kubelet[2016]: I0513 00:43:48.286604 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0da15f-ce22-4c29-b41e-bff78ed4339c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:48.287024 kubelet[2016]: I0513 00:43:48.286994 2016 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0da15f-ce22-4c29-b41e-bff78ed4339c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fb0da15f-ce22-4c29-b41e-bff78ed4339c" (UID: "fb0da15f-ce22-4c29-b41e-bff78ed4339c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:43:48.287290 systemd[1]: var-lib-kubelet-pods-fb0da15f\x2dce22\x2d4c29\x2db41e\x2dbff78ed4339c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw64dd.mount: Deactivated successfully. May 13 00:43:48.287366 systemd[1]: var-lib-kubelet-pods-fb0da15f\x2dce22\x2d4c29\x2db41e\x2dbff78ed4339c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:43:48.287420 systemd[1]: var-lib-kubelet-pods-fb0da15f\x2dce22\x2d4c29\x2db41e\x2dbff78ed4339c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 13 00:43:48.382279 kubelet[2016]: I0513 00:43:48.382227 2016 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382482 kubelet[2016]: I0513 00:43:48.382343 2016 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382482 kubelet[2016]: I0513 00:43:48.382383 2016 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb0da15f-ce22-4c29-b41e-bff78ed4339c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382482 kubelet[2016]: I0513 00:43:48.382393 2016 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382482 kubelet[2016]: I0513 00:43:48.382402 2016 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 
00:43:48.382482 kubelet[2016]: I0513 00:43:48.382413 2016 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382482 kubelet[2016]: I0513 00:43:48.382421 2016 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382482 kubelet[2016]: I0513 00:43:48.382428 2016 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb0da15f-ce22-4c29-b41e-bff78ed4339c-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382482 kubelet[2016]: I0513 00:43:48.382437 2016 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w64dd\" (UniqueName: \"kubernetes.io/projected/fb0da15f-ce22-4c29-b41e-bff78ed4339c-kube-api-access-w64dd\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382707 kubelet[2016]: I0513 00:43:48.382445 2016 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382707 kubelet[2016]: I0513 00:43:48.382467 2016 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382707 kubelet[2016]: I0513 00:43:48.382474 2016 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fb0da15f-ce22-4c29-b41e-bff78ed4339c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.382707 kubelet[2016]: I0513 00:43:48.382484 2016 reconciler_common.go:289] 
"Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb0da15f-ce22-4c29-b41e-bff78ed4339c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:43:48.670187 systemd[1]: Removed slice kubepods-burstable-podfb0da15f_ce22_4c29_b41e_bff78ed4339c.slice. May 13 00:43:49.154056 kubelet[2016]: I0513 00:43:49.154000 2016 topology_manager.go:215] "Topology Admit Handler" podUID="09db3306-08fc-4308-acb9-cc0ca6c72e2e" podNamespace="kube-system" podName="cilium-nhh4l" May 13 00:43:49.165155 systemd[1]: Created slice kubepods-burstable-pod09db3306_08fc_4308_acb9_cc0ca6c72e2e.slice. May 13 00:43:49.287048 kubelet[2016]: I0513 00:43:49.286984 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjxdc\" (UniqueName: \"kubernetes.io/projected/09db3306-08fc-4308-acb9-cc0ca6c72e2e-kube-api-access-pjxdc\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287048 kubelet[2016]: I0513 00:43:49.287026 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-cilium-run\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287048 kubelet[2016]: I0513 00:43:49.287044 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-cilium-cgroup\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287048 kubelet[2016]: I0513 00:43:49.287058 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-cni-path\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287567 kubelet[2016]: I0513 00:43:49.287072 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-lib-modules\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287567 kubelet[2016]: I0513 00:43:49.287152 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09db3306-08fc-4308-acb9-cc0ca6c72e2e-cilium-config-path\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287567 kubelet[2016]: I0513 00:43:49.287199 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-xtables-lock\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287567 kubelet[2016]: I0513 00:43:49.287235 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-host-proc-sys-net\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287567 kubelet[2016]: I0513 00:43:49.287257 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-bpf-maps\") pod \"cilium-nhh4l\" (UID: 
\"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287567 kubelet[2016]: I0513 00:43:49.287288 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-hostproc\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287736 kubelet[2016]: I0513 00:43:49.287305 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09db3306-08fc-4308-acb9-cc0ca6c72e2e-clustermesh-secrets\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287736 kubelet[2016]: I0513 00:43:49.287323 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09db3306-08fc-4308-acb9-cc0ca6c72e2e-cilium-ipsec-secrets\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287736 kubelet[2016]: I0513 00:43:49.287341 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09db3306-08fc-4308-acb9-cc0ca6c72e2e-hubble-tls\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287736 kubelet[2016]: I0513 00:43:49.287358 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-etc-cni-netd\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.287736 kubelet[2016]: I0513 
00:43:49.287376 2016 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09db3306-08fc-4308-acb9-cc0ca6c72e2e-host-proc-sys-kernel\") pod \"cilium-nhh4l\" (UID: \"09db3306-08fc-4308-acb9-cc0ca6c72e2e\") " pod="kube-system/cilium-nhh4l" May 13 00:43:49.468399 kubelet[2016]: E0513 00:43:49.468362 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:49.468989 env[1208]: time="2025-05-13T00:43:49.468947959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhh4l,Uid:09db3306-08fc-4308-acb9-cc0ca6c72e2e,Namespace:kube-system,Attempt:0,}" May 13 00:43:49.499171 env[1208]: time="2025-05-13T00:43:49.499097319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:43:49.499171 env[1208]: time="2025-05-13T00:43:49.499146764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:43:49.499171 env[1208]: time="2025-05-13T00:43:49.499157233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:43:49.499370 env[1208]: time="2025-05-13T00:43:49.499336416Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc pid=3903 runtime=io.containerd.runc.v2 May 13 00:43:49.512136 systemd[1]: Started cri-containerd-fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc.scope. 
May 13 00:43:49.534709 env[1208]: time="2025-05-13T00:43:49.534655992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhh4l,Uid:09db3306-08fc-4308-acb9-cc0ca6c72e2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\"" May 13 00:43:49.535367 kubelet[2016]: E0513 00:43:49.535325 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:49.537845 env[1208]: time="2025-05-13T00:43:49.537810188Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:43:49.551226 env[1208]: time="2025-05-13T00:43:49.551184028Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35c45b199d6ae76d1ceaf2e88ec040898ea3a27ede4df754659635cb0f64a9f1\"" May 13 00:43:49.551701 env[1208]: time="2025-05-13T00:43:49.551565968Z" level=info msg="StartContainer for \"35c45b199d6ae76d1ceaf2e88ec040898ea3a27ede4df754659635cb0f64a9f1\"" May 13 00:43:49.564914 systemd[1]: Started cri-containerd-35c45b199d6ae76d1ceaf2e88ec040898ea3a27ede4df754659635cb0f64a9f1.scope. May 13 00:43:49.588476 env[1208]: time="2025-05-13T00:43:49.588426478Z" level=info msg="StartContainer for \"35c45b199d6ae76d1ceaf2e88ec040898ea3a27ede4df754659635cb0f64a9f1\" returns successfully" May 13 00:43:49.596232 systemd[1]: cri-containerd-35c45b199d6ae76d1ceaf2e88ec040898ea3a27ede4df754659635cb0f64a9f1.scope: Deactivated successfully. 
May 13 00:43:49.627620 env[1208]: time="2025-05-13T00:43:49.627531116Z" level=info msg="shim disconnected" id=35c45b199d6ae76d1ceaf2e88ec040898ea3a27ede4df754659635cb0f64a9f1 May 13 00:43:49.627620 env[1208]: time="2025-05-13T00:43:49.627614595Z" level=warning msg="cleaning up after shim disconnected" id=35c45b199d6ae76d1ceaf2e88ec040898ea3a27ede4df754659635cb0f64a9f1 namespace=k8s.io May 13 00:43:49.627620 env[1208]: time="2025-05-13T00:43:49.627625486Z" level=info msg="cleaning up dead shim" May 13 00:43:49.636121 env[1208]: time="2025-05-13T00:43:49.636093364Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3989 runtime=io.containerd.runc.v2\n" May 13 00:43:50.128246 kubelet[2016]: E0513 00:43:50.128215 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:50.129771 env[1208]: time="2025-05-13T00:43:50.129704981Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:43:50.141663 env[1208]: time="2025-05-13T00:43:50.141593313Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b1791a063ebfc278036816b0adf1a99261e11ea10016671ebe8e5509ee766c5e\"" May 13 00:43:50.143138 env[1208]: time="2025-05-13T00:43:50.143103186Z" level=info msg="StartContainer for \"b1791a063ebfc278036816b0adf1a99261e11ea10016671ebe8e5509ee766c5e\"" May 13 00:43:50.156527 systemd[1]: Started cri-containerd-b1791a063ebfc278036816b0adf1a99261e11ea10016671ebe8e5509ee766c5e.scope. 
May 13 00:43:50.179723 env[1208]: time="2025-05-13T00:43:50.179669304Z" level=info msg="StartContainer for \"b1791a063ebfc278036816b0adf1a99261e11ea10016671ebe8e5509ee766c5e\" returns successfully" May 13 00:43:50.185271 systemd[1]: cri-containerd-b1791a063ebfc278036816b0adf1a99261e11ea10016671ebe8e5509ee766c5e.scope: Deactivated successfully. May 13 00:43:50.202952 env[1208]: time="2025-05-13T00:43:50.202901931Z" level=info msg="shim disconnected" id=b1791a063ebfc278036816b0adf1a99261e11ea10016671ebe8e5509ee766c5e May 13 00:43:50.202952 env[1208]: time="2025-05-13T00:43:50.202942949Z" level=warning msg="cleaning up after shim disconnected" id=b1791a063ebfc278036816b0adf1a99261e11ea10016671ebe8e5509ee766c5e namespace=k8s.io May 13 00:43:50.202952 env[1208]: time="2025-05-13T00:43:50.202951786Z" level=info msg="cleaning up dead shim" May 13 00:43:50.210705 env[1208]: time="2025-05-13T00:43:50.210642613Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4050 runtime=io.containerd.runc.v2\n" May 13 00:43:50.667385 kubelet[2016]: I0513 00:43:50.667346 2016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb0da15f-ce22-4c29-b41e-bff78ed4339c" path="/var/lib/kubelet/pods/fb0da15f-ce22-4c29-b41e-bff78ed4339c/volumes" May 13 00:43:51.131317 kubelet[2016]: E0513 00:43:51.131283 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:51.133201 env[1208]: time="2025-05-13T00:43:51.133157697Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:43:51.149903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3109112497.mount: Deactivated successfully. 
May 13 00:43:51.157439 env[1208]: time="2025-05-13T00:43:51.157379352Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cf2602375456241e7338d842c64e10a06e431215b4d3183476483710958bfca2\"" May 13 00:43:51.158068 env[1208]: time="2025-05-13T00:43:51.158018953Z" level=info msg="StartContainer for \"cf2602375456241e7338d842c64e10a06e431215b4d3183476483710958bfca2\"" May 13 00:43:51.173343 systemd[1]: Started cri-containerd-cf2602375456241e7338d842c64e10a06e431215b4d3183476483710958bfca2.scope. May 13 00:43:51.198000 env[1208]: time="2025-05-13T00:43:51.197962881Z" level=info msg="StartContainer for \"cf2602375456241e7338d842c64e10a06e431215b4d3183476483710958bfca2\" returns successfully" May 13 00:43:51.198899 systemd[1]: cri-containerd-cf2602375456241e7338d842c64e10a06e431215b4d3183476483710958bfca2.scope: Deactivated successfully. May 13 00:43:51.221876 env[1208]: time="2025-05-13T00:43:51.221833305Z" level=info msg="shim disconnected" id=cf2602375456241e7338d842c64e10a06e431215b4d3183476483710958bfca2 May 13 00:43:51.221876 env[1208]: time="2025-05-13T00:43:51.221873793Z" level=warning msg="cleaning up after shim disconnected" id=cf2602375456241e7338d842c64e10a06e431215b4d3183476483710958bfca2 namespace=k8s.io May 13 00:43:51.222084 env[1208]: time="2025-05-13T00:43:51.221882459Z" level=info msg="cleaning up dead shim" May 13 00:43:51.228072 env[1208]: time="2025-05-13T00:43:51.228029253Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4106 runtime=io.containerd.runc.v2\n" May 13 00:43:51.392784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf2602375456241e7338d842c64e10a06e431215b4d3183476483710958bfca2-rootfs.mount: Deactivated successfully. 
May 13 00:43:52.135165 kubelet[2016]: E0513 00:43:52.135117 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:52.136999 env[1208]: time="2025-05-13T00:43:52.136952259Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:43:52.152011 env[1208]: time="2025-05-13T00:43:52.151949655Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f6f8a5a4132682611951561c08909d56b8b9f201d1d33b9c460ed9c1b8dfaa9\"" May 13 00:43:52.152625 env[1208]: time="2025-05-13T00:43:52.152566081Z" level=info msg="StartContainer for \"5f6f8a5a4132682611951561c08909d56b8b9f201d1d33b9c460ed9c1b8dfaa9\"" May 13 00:43:52.168691 systemd[1]: Started cri-containerd-5f6f8a5a4132682611951561c08909d56b8b9f201d1d33b9c460ed9c1b8dfaa9.scope. May 13 00:43:52.189750 systemd[1]: cri-containerd-5f6f8a5a4132682611951561c08909d56b8b9f201d1d33b9c460ed9c1b8dfaa9.scope: Deactivated successfully. 
May 13 00:43:52.191446 env[1208]: time="2025-05-13T00:43:52.191391644Z" level=info msg="StartContainer for \"5f6f8a5a4132682611951561c08909d56b8b9f201d1d33b9c460ed9c1b8dfaa9\" returns successfully"
May 13 00:43:52.209820 env[1208]: time="2025-05-13T00:43:52.209759833Z" level=info msg="shim disconnected" id=5f6f8a5a4132682611951561c08909d56b8b9f201d1d33b9c460ed9c1b8dfaa9
May 13 00:43:52.209820 env[1208]: time="2025-05-13T00:43:52.209813646Z" level=warning msg="cleaning up after shim disconnected" id=5f6f8a5a4132682611951561c08909d56b8b9f201d1d33b9c460ed9c1b8dfaa9 namespace=k8s.io
May 13 00:43:52.209820 env[1208]: time="2025-05-13T00:43:52.209822814Z" level=info msg="cleaning up dead shim"
May 13 00:43:52.215849 env[1208]: time="2025-05-13T00:43:52.215805359Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4161 runtime=io.containerd.runc.v2\n"
May 13 00:43:52.393415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f6f8a5a4132682611951561c08909d56b8b9f201d1d33b9c460ed9c1b8dfaa9-rootfs.mount: Deactivated successfully.
May 13 00:43:52.713163 kubelet[2016]: E0513 00:43:52.713112 2016 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:43:53.139456 kubelet[2016]: E0513 00:43:53.139338 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:53.141843 env[1208]: time="2025-05-13T00:43:53.141790511Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:43:53.158741 env[1208]: time="2025-05-13T00:43:53.158675778Z" level=info msg="CreateContainer within sandbox \"fb53fdd2c7c6b872e03941957bb162440f631db2b953462807ae1e5f75e77ecc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4fcb4673d547c00d9d32059010cf4b21c6f07c02fd541bf63b62ceb9109044f4\""
May 13 00:43:53.159259 env[1208]: time="2025-05-13T00:43:53.159124624Z" level=info msg="StartContainer for \"4fcb4673d547c00d9d32059010cf4b21c6f07c02fd541bf63b62ceb9109044f4\""
May 13 00:43:53.174793 systemd[1]: Started cri-containerd-4fcb4673d547c00d9d32059010cf4b21c6f07c02fd541bf63b62ceb9109044f4.scope.
May 13 00:43:53.198479 env[1208]: time="2025-05-13T00:43:53.198417520Z" level=info msg="StartContainer for \"4fcb4673d547c00d9d32059010cf4b21c6f07c02fd541bf63b62ceb9109044f4\" returns successfully"
May 13 00:43:53.471741 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 00:43:54.144145 kubelet[2016]: E0513 00:43:54.144115 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:54.155853 kubelet[2016]: I0513 00:43:54.155792 2016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nhh4l" podStartSLOduration=5.155772771 podStartE2EDuration="5.155772771s" podCreationTimestamp="2025-05-13 00:43:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:43:54.155521181 +0000 UTC m=+101.557441901" watchObservedRunningTime="2025-05-13 00:43:54.155772771 +0000 UTC m=+101.557693471"
May 13 00:43:55.473333 kubelet[2016]: E0513 00:43:55.473289 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:56.082767 systemd-networkd[1026]: lxc_health: Link UP
May 13 00:43:56.089272 systemd-networkd[1026]: lxc_health: Gained carrier
May 13 00:43:56.089620 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 00:43:56.663704 kubelet[2016]: I0513 00:43:56.663654 2016 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:43:56Z","lastTransitionTime":"2025-05-13T00:43:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 00:43:56.667526 kubelet[2016]: E0513 00:43:56.667482 2016 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-htdjb" podUID="4a9335c3-bdf8-4677-ac55-8993a20b2c96"
May 13 00:43:57.461641 systemd-networkd[1026]: lxc_health: Gained IPv6LL
May 13 00:43:57.470623 kubelet[2016]: E0513 00:43:57.470563 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:58.150323 kubelet[2016]: E0513 00:43:58.150261 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:58.666333 kubelet[2016]: E0513 00:43:58.666295 2016 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:44:01.840000 systemd[1]: run-containerd-runc-k8s.io-4fcb4673d547c00d9d32059010cf4b21c6f07c02fd541bf63b62ceb9109044f4-runc.fgD0VW.mount: Deactivated successfully.
May 13 00:44:01.887321 sshd[3874]: pam_unix(sshd:session): session closed for user core
May 13 00:44:01.889737 systemd[1]: sshd@27-10.0.0.59:22-10.0.0.1:51750.service: Deactivated successfully.
May 13 00:44:01.890378 systemd[1]: session-28.scope: Deactivated successfully.
May 13 00:44:01.890969 systemd-logind[1190]: Session 28 logged out. Waiting for processes to exit.
May 13 00:44:01.891711 systemd-logind[1190]: Removed session 28.