May 13 00:41:14.859338 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025 May 13 00:41:14.859355 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:41:14.859364 kernel: BIOS-provided physical RAM map: May 13 00:41:14.859370 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 00:41:14.859375 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 13 00:41:14.859380 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 13 00:41:14.859387 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 13 00:41:14.859393 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 13 00:41:14.859398 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 13 00:41:14.859405 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 13 00:41:14.859410 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 13 00:41:14.859416 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 13 00:41:14.859421 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 13 00:41:14.859427 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 13 00:41:14.859434 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 13 00:41:14.859441 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 13 00:41:14.859447 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 13 00:41:14.859453 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 00:41:14.859459 kernel: NX (Execute Disable) protection: active May 13 00:41:14.859464 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 13 00:41:14.859470 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 13 00:41:14.859476 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 13 00:41:14.859482 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 13 00:41:14.859488 kernel: extended physical RAM map: May 13 00:41:14.859493 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 00:41:14.859501 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 13 00:41:14.859507 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 13 00:41:14.859513 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 13 00:41:14.859518 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 13 00:41:14.859524 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable May 13 00:41:14.859530 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 13 00:41:14.859536 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable May 13 00:41:14.859542 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable May 13 00:41:14.859547 kernel: reserve setup_data: [mem 
0x000000009b474e58-0x000000009b475017] usable May 13 00:41:14.859553 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable May 13 00:41:14.859559 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable May 13 00:41:14.859566 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 13 00:41:14.859572 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 13 00:41:14.859578 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 13 00:41:14.859584 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 13 00:41:14.859610 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 13 00:41:14.859631 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 13 00:41:14.859645 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 13 00:41:14.859653 kernel: efi: EFI v2.70 by EDK II May 13 00:41:14.860517 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 May 13 00:41:14.860525 kernel: random: crng init done May 13 00:41:14.863333 kernel: SMBIOS 2.8 present. May 13 00:41:14.863340 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 13 00:41:14.863347 kernel: Hypervisor detected: KVM May 13 00:41:14.863353 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 00:41:14.863360 kernel: kvm-clock: cpu 0, msr 47196001, primary cpu clock May 13 00:41:14.863366 kernel: kvm-clock: using sched offset of 3998832536 cycles May 13 00:41:14.863376 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 00:41:14.863383 kernel: tsc: Detected 2794.748 MHz processor May 13 00:41:14.863389 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 00:41:14.863396 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 00:41:14.863403 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 13 00:41:14.863409 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 00:41:14.863416 kernel: Using GB pages for direct mapping May 13 00:41:14.863422 kernel: Secure boot disabled May 13 00:41:14.863429 kernel: ACPI: Early table checksum verification disabled May 13 00:41:14.863436 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 13 00:41:14.863443 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 13 00:41:14.863450 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:14.863456 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:14.863463 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 13 00:41:14.863469 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:14.863476 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:14.863482 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:14.863489 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:41:14.863497 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 13 00:41:14.863503 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 13 00:41:14.863510 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cb7a000-0x9cb7c1a7] May 13 00:41:14.863516 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 13 00:41:14.863523 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 13 00:41:14.863529 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 13 00:41:14.863536 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 13 00:41:14.863542 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 13 00:41:14.863549 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 13 00:41:14.863556 kernel: No NUMA configuration found May 13 00:41:14.863563 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 13 00:41:14.863569 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 13 00:41:14.863576 kernel: Zone ranges: May 13 00:41:14.863582 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 00:41:14.863589 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 13 00:41:14.863595 kernel: Normal empty May 13 00:41:14.863602 kernel: Movable zone start for each node May 13 00:41:14.863608 kernel: Early memory node ranges May 13 00:41:14.863616 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 13 00:41:14.863622 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 13 00:41:14.863629 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 13 00:41:14.863635 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 13 00:41:14.863641 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 13 00:41:14.863648 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 13 00:41:14.863654 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 13 00:41:14.863661 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:41:14.863667 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 13 00:41:14.863674 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 13 00:41:14.863682 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 00:41:14.863688 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 13 00:41:14.863695 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 13 00:41:14.863701 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 13 00:41:14.863708 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 00:41:14.863714 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 00:41:14.863721 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 00:41:14.863727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 00:41:14.863734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 00:41:14.863741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 00:41:14.863748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 00:41:14.863754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 00:41:14.863761 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 00:41:14.863767 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 13 00:41:14.863774 kernel: TSC deadline timer available May 13 00:41:14.863780 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 13 00:41:14.863787 kernel: kvm-guest: KVM setup pv remote TLB flush May 13 00:41:14.863793 kernel: kvm-guest: setup PV sched yield May 13 00:41:14.863816 kernel: [mem 
0xc0000000-0xffffffff] available for PCI devices May 13 00:41:14.863823 kernel: Booting paravirtualized kernel on KVM May 13 00:41:14.863835 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 00:41:14.863843 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 13 00:41:14.863850 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 13 00:41:14.863857 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 13 00:41:14.863864 kernel: pcpu-alloc: [0] 0 1 2 3 May 13 00:41:14.863870 kernel: kvm-guest: setup async PF for cpu 0 May 13 00:41:14.863877 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 May 13 00:41:14.863884 kernel: kvm-guest: PV spinlocks enabled May 13 00:41:14.863891 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 13 00:41:14.863897 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 May 13 00:41:14.863906 kernel: Policy zone: DMA32 May 13 00:41:14.863914 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:41:14.863921 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:41:14.863928 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 00:41:14.863936 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:41:14.863943 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:41:14.863950 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 169308K reserved, 0K cma-reserved) May 13 00:41:14.863957 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 00:41:14.863964 kernel: ftrace: allocating 34584 entries in 136 pages May 13 00:41:14.863971 kernel: ftrace: allocated 136 pages with 2 groups May 13 00:41:14.863978 kernel: rcu: Hierarchical RCU implementation. May 13 00:41:14.863985 kernel: rcu: RCU event tracing is enabled. May 13 00:41:14.863993 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 00:41:14.864001 kernel: Rude variant of Tasks RCU enabled. May 13 00:41:14.864007 kernel: Tracing variant of Tasks RCU enabled. May 13 00:41:14.864014 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 00:41:14.864021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 00:41:14.864028 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 13 00:41:14.864035 kernel: Console: colour dummy device 80x25 May 13 00:41:14.864042 kernel: printk: console [ttyS0] enabled May 13 00:41:14.864049 kernel: ACPI: Core revision 20210730 May 13 00:41:14.864056 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 13 00:41:14.864064 kernel: APIC: Switch to symmetric I/O mode setup May 13 00:41:14.864071 kernel: x2apic enabled May 13 00:41:14.864078 kernel: Switched APIC routing to physical x2apic. 
May 13 00:41:14.864084 kernel: kvm-guest: setup PV IPIs May 13 00:41:14.864091 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 00:41:14.864098 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 00:41:14.864108 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 13 00:41:14.864115 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 13 00:41:14.864123 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 13 00:41:14.864138 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 13 00:41:14.864145 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 00:41:14.864152 kernel: Spectre V2 : Mitigation: Retpolines May 13 00:41:14.864159 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 00:41:14.864950 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 13 00:41:14.864958 kernel: RETBleed: Mitigation: untrained return thunk May 13 00:41:14.867771 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 00:41:14.867779 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 13 00:41:14.867786 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 00:41:14.867796 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 00:41:14.867812 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 00:41:14.867820 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 00:41:14.867827 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 13 00:41:14.867834 kernel: Freeing SMP alternatives memory: 32K May 13 00:41:14.867841 kernel: pid_max: default: 32768 minimum: 301 May 13 00:41:14.867847 kernel: LSM: Security Framework initializing May 13 00:41:14.867854 kernel: SELinux: Initializing. May 13 00:41:14.867861 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:41:14.867870 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:41:14.867877 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 13 00:41:14.867884 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 13 00:41:14.867890 kernel: ... version: 0 May 13 00:41:14.867897 kernel: ... bit width: 48 May 13 00:41:14.867904 kernel: ... generic registers: 6 May 13 00:41:14.867911 kernel: ... value mask: 0000ffffffffffff May 13 00:41:14.867918 kernel: ... max period: 00007fffffffffff May 13 00:41:14.867924 kernel: ... fixed-purpose events: 0 May 13 00:41:14.867933 kernel: ... event mask: 000000000000003f May 13 00:41:14.867939 kernel: signal: max sigframe size: 1776 May 13 00:41:14.867946 kernel: rcu: Hierarchical SRCU implementation. May 13 00:41:14.867953 kernel: smp: Bringing up secondary CPUs ... May 13 00:41:14.867960 kernel: x86: Booting SMP configuration: May 13 00:41:14.867967 kernel: .... 
node #0, CPUs: #1 May 13 00:41:14.867974 kernel: kvm-clock: cpu 1, msr 47196041, secondary cpu clock May 13 00:41:14.867981 kernel: kvm-guest: setup async PF for cpu 1 May 13 00:41:14.867987 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 May 13 00:41:14.867996 kernel: #2 May 13 00:41:14.868003 kernel: kvm-clock: cpu 2, msr 47196081, secondary cpu clock May 13 00:41:14.868009 kernel: kvm-guest: setup async PF for cpu 2 May 13 00:41:14.868016 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 May 13 00:41:14.868023 kernel: #3 May 13 00:41:14.868030 kernel: kvm-clock: cpu 3, msr 471960c1, secondary cpu clock May 13 00:41:14.868037 kernel: kvm-guest: setup async PF for cpu 3 May 13 00:41:14.868043 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 May 13 00:41:14.868050 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:41:14.868057 kernel: smpboot: Max logical packages: 1 May 13 00:41:14.868065 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 13 00:41:14.868072 kernel: devtmpfs: initialized May 13 00:41:14.868078 kernel: x86/mm: Memory block size: 128MB May 13 00:41:14.868085 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 13 00:41:14.868092 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 13 00:41:14.868099 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 13 00:41:14.868106 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 13 00:41:14.868113 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 13 00:41:14.868120 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:41:14.868137 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:41:14.868144 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:41:14.868151 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:41:14.868158 kernel: audit: initializing netlink subsys (disabled) May 13 00:41:14.868165 kernel: audit: type=2000 audit(1747096874.216:1): state=initialized audit_enabled=0 res=1 May 13 00:41:14.868171 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:41:14.868178 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:41:14.868185 kernel: cpuidle: using governor menu May 13 00:41:14.868192 kernel: ACPI: bus type PCI registered May 13 00:41:14.868200 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:41:14.868207 kernel: dca service started, version 1.12.1 May 13 00:41:14.868214 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 13 00:41:14.868221 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 13 00:41:14.868227 kernel: PCI: Using configuration type 1 for base access May 13 00:41:14.868234 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 00:41:14.868241 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:41:14.868248 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:41:14.868256 kernel: ACPI: Added _OSI(Module Device) May 13 00:41:14.868263 kernel: ACPI: Added _OSI(Processor Device) May 13 00:41:14.868270 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:41:14.868277 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:41:14.868283 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 00:41:14.868290 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 00:41:14.868297 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 00:41:14.868304 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:41:14.868311 kernel: ACPI: Interpreter enabled May 13 00:41:14.868318 kernel: ACPI: PM: (supports S0 S3 S5) May 13 00:41:14.868326 kernel: ACPI: Using IOAPIC for interrupt routing May 13 00:41:14.868333 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 00:41:14.868339 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 13 00:41:14.868346 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 00:41:14.868458 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:41:14.868528 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 13 00:41:14.868594 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 13 00:41:14.868606 kernel: PCI host bridge to bus 0000:00 May 13 00:41:14.872409 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 00:41:14.872476 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 00:41:14.872536 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 00:41:14.872594 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 13 00:41:14.872653 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 00:41:14.872713 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 13 00:41:14.872775 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 00:41:14.872867 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 13 00:41:14.872945 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 13 00:41:14.873015 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 13 00:41:14.873083 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 13 00:41:14.873157 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 13 00:41:14.873228 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 13 00:41:14.874039 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 00:41:14.874160 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 13 00:41:14.874249 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 13 00:41:14.874333 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 13 00:41:14.874402 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 13 00:41:14.874491 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 13 00:41:14.874578 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 13 00:41:14.874662 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 13 00:41:14.874746 kernel: pci 0000:00:03.0: reg 0x20: 
[mem 0x800004000-0x800007fff 64bit pref] May 13 00:41:14.874856 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 13 00:41:14.874953 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 13 00:41:14.875044 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 13 00:41:14.875144 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 13 00:41:14.875241 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 13 00:41:14.875337 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 13 00:41:14.875424 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 13 00:41:14.875525 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 13 00:41:14.875619 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 13 00:41:14.875701 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 13 00:41:14.875778 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 13 00:41:14.875896 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 13 00:41:14.875907 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 00:41:14.875914 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 00:41:14.875922 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 00:41:14.875928 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 00:41:14.875935 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 13 00:41:14.875942 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 13 00:41:14.875949 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 13 00:41:14.875958 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 13 00:41:14.875965 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 13 00:41:14.875972 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 13 00:41:14.875979 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 13 00:41:14.875986 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 13 00:41:14.875993 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 13 00:41:14.876000 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 13 00:41:14.876006 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 13 00:41:14.876013 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 13 00:41:14.876021 kernel: iommu: Default domain type: Translated May 13 00:41:14.876028 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:41:14.876096 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 13 00:41:14.876172 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:41:14.876239 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 13 00:41:14.876248 kernel: vgaarb: loaded May 13 00:41:14.876255 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 00:41:14.876262 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> May 13 00:41:14.876269 kernel: PTP clock support registered May 13 00:41:14.876278 kernel: Registered efivars operations May 13 00:41:14.876285 kernel: PCI: Using ACPI for IRQ routing May 13 00:41:14.876292 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:41:14.876299 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 13 00:41:14.876305 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 13 00:41:14.876312 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] May 13 00:41:14.876319 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] May 13 00:41:14.876326 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 13 00:41:14.876333 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 13 00:41:14.876341 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 13 00:41:14.876348 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 13 00:41:14.876355 kernel: clocksource: Switched to clocksource kvm-clock May 13 00:41:14.876362 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:41:14.876369 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:41:14.876376 kernel: pnp: PnP ACPI init May 13 00:41:14.876447 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 13 00:41:14.876459 kernel: pnp: PnP ACPI: found 6 devices May 13 00:41:14.876466 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 00:41:14.876473 kernel: NET: Registered PF_INET protocol family May 13 00:41:14.876480 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 00:41:14.876487 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 00:41:14.876494 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:41:14.876501 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:41:14.876508 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 13 00:41:14.876516 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 00:41:14.876524 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:41:14.876531 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:41:14.876538 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:41:14.876545 kernel: NET: Registered PF_XDP protocol family May 13 00:41:14.876613 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 13 00:41:14.876682 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 13 00:41:14.876743 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 00:41:14.876823 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 00:41:14.876890 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 00:41:14.876949 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 13 00:41:14.877008 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 13 00:41:14.877067 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 13 00:41:14.877076 kernel: PCI: CLS 0 bytes, default 64 May 13 00:41:14.877083 kernel: Initialise system trusted keyrings May 13 00:41:14.877090 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 00:41:14.877097
kernel: Key type asymmetric registered May 13 00:41:14.877104 kernel: Asymmetric key parser 'x509' registered May 13 00:41:14.877113 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 00:41:14.877120 kernel: io scheduler mq-deadline registered May 13 00:41:14.877144 kernel: io scheduler kyber registered May 13 00:41:14.877153 kernel: io scheduler bfq registered May 13 00:41:14.877160 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 00:41:14.877168 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 13 00:41:14.877175 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 13 00:41:14.877183 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 13 00:41:14.877190 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:41:14.877198 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:41:14.877206 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 00:41:14.877213 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:41:14.877220 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:41:14.877228 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:41:14.877303 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 00:41:14.877366 kernel: rtc_cmos 00:04: registered as rtc0 May 13 00:41:14.877428 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:41:14 UTC (1747096874) May 13 00:41:14.877493 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 13 00:41:14.877502 kernel: efifb: probing for efifb May 13 00:41:14.877509 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 13 00:41:14.877517 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 13 00:41:14.877524 kernel: efifb: scrolling: redraw May 13 00:41:14.877531 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 13 00:41:14.877538 kernel: Console: switching to colour frame buffer device 160x50 May 13 00:41:14.877545 kernel: fb0: EFI VGA frame buffer device May 13 00:41:14.877553 kernel: pstore: Registered efi as persistent store backend May 13 00:41:14.877561 kernel: NET: Registered PF_INET6 protocol family May 13 00:41:14.877569 kernel: Segment Routing with IPv6 May 13 00:41:14.877576 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:41:14.877584 kernel: NET: Registered PF_PACKET protocol family May 13 00:41:14.877592 kernel: Key type dns_resolver registered May 13 00:41:14.877599 kernel: IPI shorthand broadcast: enabled May 13 00:41:14.877608 kernel: sched_clock: Marking stable (463069789, 127978885)->(606026112, -14977438) May 13 00:41:14.877616 kernel: registered taskstats version 1 May 13 00:41:14.877623 kernel: Loading compiled-in X.509 certificates May 13 00:41:14.877630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095' May 13 00:41:14.877638 kernel: Key type .fscrypt registered May 13 00:41:14.877644 kernel: Key type fscrypt-provisioning registered May 13 00:41:14.877652 kernel: pstore: Using crash dump compression: deflate May 13 00:41:14.877659 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 00:41:14.877668 kernel: ima: Allocated hash algorithm: sha1 May 13 00:41:14.877675 kernel: ima: No architecture policies found May 13 00:41:14.877682 kernel: clk: Disabling unused clocks May 13 00:41:14.877689 kernel: Freeing unused kernel image (initmem) memory: 47456K May 13 00:41:14.877697 kernel: Write protecting the kernel read-only data: 28672k May 13 00:41:14.877704 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 13 00:41:14.877711 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 13 00:41:14.877718 kernel: Run /init as init process May 13 00:41:14.877726 kernel: with arguments: May 13 00:41:14.877734 kernel: /init May 13 00:41:14.877741 kernel: with environment: May 13 00:41:14.877748 kernel: HOME=/ May 13 00:41:14.877755 kernel: TERM=linux May 13 00:41:14.877762 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:41:14.877771 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:41:14.877781 systemd[1]: Detected virtualization kvm. May 13 00:41:14.877789 systemd[1]: Detected architecture x86-64. May 13 00:41:14.877797 systemd[1]: Running in initrd. May 13 00:41:14.877834 systemd[1]: No hostname configured, using default hostname. May 13 00:41:14.877842 systemd[1]: Hostname set to <localhost>. May 13 00:41:14.877850 systemd[1]: Initializing machine ID from VM UUID. May 13 00:41:14.877857 systemd[1]: Queued start job for default target initrd.target. May 13 00:41:14.877865 systemd[1]: Started systemd-ask-password-console.path. May 13 00:41:14.877872 systemd[1]: Reached target cryptsetup.target. May 13 00:41:14.877880 systemd[1]: Reached target paths.target. May 13 00:41:14.877887 systemd[1]: Reached target slices.target. May 13 00:41:14.877897 systemd[1]: Reached target swap.target. May 13 00:41:14.877904 systemd[1]: Reached target timers.target. May 13 00:41:14.877912 systemd[1]: Listening on iscsid.socket. May 13 00:41:14.877920 systemd[1]: Listening on iscsiuio.socket. May 13 00:41:14.877928 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:41:14.877935 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:41:14.877943 systemd[1]: Listening on systemd-journald.socket. May 13 00:41:14.877952 systemd[1]: Listening on systemd-networkd.socket. May 13 00:41:14.877959 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:41:14.877967 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:41:14.877974 systemd[1]: Reached target sockets.target. May 13 00:41:14.877982 systemd[1]: Starting kmod-static-nodes.service... May 13 00:41:14.877990 systemd[1]: Finished network-cleanup.service. May 13 00:41:14.877997 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:41:14.878005 systemd[1]: Starting systemd-journald.service... May 13 00:41:14.878013 systemd[1]: Starting systemd-modules-load.service... May 13 00:41:14.878022 systemd[1]: Starting systemd-resolved.service... May 13 00:41:14.878030 systemd[1]: Starting systemd-vconsole-setup.service... May 13 00:41:14.878037 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:41:14.878045 kernel: audit: type=1130 audit(1747096874.859:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.878053 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:41:14.878060 kernel: audit: type=1130 audit(1747096874.864:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.878068 systemd[1]: Finished systemd-vconsole-setup.service. May 13 00:41:14.878076 kernel: audit: type=1130 audit(1747096874.868:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.878085 systemd[1]: Starting dracut-cmdline-ask.service... May 13 00:41:14.878095 systemd-journald[196]: Journal started May 13 00:41:14.878137 systemd-journald[196]: Runtime Journal (/run/log/journal/6d9a2259ad4e4064b66c50e17c404336) is 6.0M, max 48.4M, 42.4M free. May 13 00:41:14.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.860511 systemd-modules-load[197]: Inserted module 'overlay' May 13 00:41:14.888545 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:41:14.888572 systemd[1]: Started systemd-journald.service. May 13 00:41:14.888591 kernel: audit: type=1130 audit(1747096874.884:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.884019 systemd-resolved[198]: Positive Trust Anchors: May 13 00:41:14.884026 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:41:14.884053 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:41:14.886140 systemd-resolved[198]: Defaulting to hostname 'linux'. May 13 00:41:14.888560 systemd[1]: Started systemd-resolved.service. May 13 00:41:14.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 00:41:14.900642 systemd[1]: Finished dracut-cmdline-ask.service. May 13 00:41:14.907151 kernel: audit: type=1130 audit(1747096874.896:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.907185 kernel: audit: type=1130 audit(1747096874.900:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.907198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:41:14.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.903977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:41:14.912935 kernel: audit: type=1130 audit(1747096874.907:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.908244 systemd[1]: Reached target nss-lookup.target. May 13 00:41:14.915147 kernel: Bridge firewalling registered May 13 00:41:14.912955 systemd[1]: Starting dracut-cmdline.service... May 13 00:41:14.914393 systemd-modules-load[197]: Inserted module 'br_netfilter' May 13 00:41:14.924645 dracut-cmdline[216]: dracut-dracut-053 May 13 00:41:14.927084 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:41:14.933842 kernel: SCSI subsystem initialized May 13 00:41:14.945013 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:41:14.945082 kernel: device-mapper: uevent: version 1.0.3 May 13 00:41:14.945097 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 00:41:14.949178 systemd-modules-load[197]: Inserted module 'dm_multipath' May 13 00:41:14.949910 systemd[1]: Finished systemd-modules-load.service. May 13 00:41:14.954895 kernel: audit: type=1130 audit(1747096874.949:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.954283 systemd[1]: Starting systemd-sysctl.service... May 13 00:41:14.961242 systemd[1]: Finished systemd-sysctl.service. 
May 13 00:41:14.965435 kernel: audit: type=1130 audit(1747096874.960:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:14.992830 kernel: Loading iSCSI transport class v2.0-870. May 13 00:41:15.008837 kernel: iscsi: registered transport (tcp) May 13 00:41:15.030842 kernel: iscsi: registered transport (qla4xxx) May 13 00:41:15.030899 kernel: QLogic iSCSI HBA Driver May 13 00:41:15.058984 systemd[1]: Finished dracut-cmdline.service. May 13 00:41:15.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:15.060169 systemd[1]: Starting dracut-pre-udev.service... May 13 00:41:15.104845 kernel: raid6: avx2x4 gen() 29608 MB/s May 13 00:41:15.121832 kernel: raid6: avx2x4 xor() 7228 MB/s May 13 00:41:15.138823 kernel: raid6: avx2x2 gen() 31958 MB/s May 13 00:41:15.155826 kernel: raid6: avx2x2 xor() 19052 MB/s May 13 00:41:15.172827 kernel: raid6: avx2x1 gen() 26402 MB/s May 13 00:41:15.189822 kernel: raid6: avx2x1 xor() 15255 MB/s May 13 00:41:15.206824 kernel: raid6: sse2x4 gen() 14668 MB/s May 13 00:41:15.223823 kernel: raid6: sse2x4 xor() 6973 MB/s May 13 00:41:15.240837 kernel: raid6: sse2x2 gen() 16337 MB/s May 13 00:41:15.257849 kernel: raid6: sse2x2 xor() 9770 MB/s May 13 00:41:15.274829 kernel: raid6: sse2x1 gen() 12218 MB/s May 13 00:41:15.292216 kernel: raid6: sse2x1 xor() 7802 MB/s May 13 00:41:15.292235 kernel: raid6: using algorithm avx2x2 gen() 31958 MB/s May 13 00:41:15.292244 kernel: raid6: .... xor() 19052 MB/s, rmw enabled May 13 00:41:15.292925 kernel: raid6: using avx2x2 recovery algorithm May 13 00:41:15.304835 kernel: xor: automatically using best checksumming function avx May 13 00:41:15.392836 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 13 00:41:15.400511 systemd[1]: Finished dracut-pre-udev.service. May 13 00:41:15.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:15.401000 audit: BPF prog-id=7 op=LOAD May 13 00:41:15.401000 audit: BPF prog-id=8 op=LOAD May 13 00:41:15.402953 systemd[1]: Starting systemd-udevd.service... May 13 00:41:15.414187 systemd-udevd[400]: Using default interface naming scheme 'v252'. May 13 00:41:15.417920 systemd[1]: Started systemd-udevd.service. May 13 00:41:15.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:15.419958 systemd[1]: Starting dracut-pre-trigger.service... May 13 00:41:15.429492 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation May 13 00:41:15.452839 systemd[1]: Finished dracut-pre-trigger.service. May 13 00:41:15.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:15.455124 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:41:15.485009 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:41:15.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:15.516836 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:41:15.531201 kernel: AVX2 version of gcm_enc/dec engaged. May 13 00:41:15.531240 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 00:41:15.545269 kernel: AES CTR mode by8 optimization enabled May 13 00:41:15.545282 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 00:41:15.545291 kernel: GPT:9289727 != 19775487 May 13 00:41:15.545303 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 00:41:15.545312 kernel: GPT:9289727 != 19775487 May 13 00:41:15.545320 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:41:15.545328 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:41:15.545337 kernel: libata version 3.00 loaded. May 13 00:41:15.557816 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450) May 13 00:41:15.557839 kernel: ahci 0000:00:1f.2: version 3.0 May 13 00:41:15.581348 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 13 00:41:15.581362 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 13 00:41:15.581448 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 13 00:41:15.581521 kernel: scsi host0: ahci May 13 00:41:15.581610 kernel: scsi host1: ahci May 13 00:41:15.581689 kernel: scsi host2: ahci May 13 00:41:15.581768 kernel: scsi host3: ahci May 13 00:41:15.581864 kernel: scsi host4: ahci May 13 00:41:15.581955 kernel: scsi host5: ahci May 13 00:41:15.582036 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 13 00:41:15.582046 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 13 00:41:15.582055 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 13 00:41:15.582064 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 13 00:41:15.582072 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 13 00:41:15.582081 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 13 00:41:15.557881 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 00:41:15.559779 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 00:41:15.568561 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 00:41:15.577635 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 00:41:15.584964 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:41:15.588863 systemd[1]: Starting disk-uuid.service... May 13 00:41:15.595338 disk-uuid[536]: Primary Header is updated. May 13 00:41:15.595338 disk-uuid[536]: Secondary Entries is updated. May 13 00:41:15.595338 disk-uuid[536]: Secondary Header is updated. 
May 13 00:41:15.598700 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:41:15.600816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:41:15.603822 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:41:15.889836 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 13 00:41:15.889911 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 13 00:41:15.890830 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 13 00:41:15.891837 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 13 00:41:15.893238 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 13 00:41:15.893263 kernel: ata3.00: applying bridge limits May 13 00:41:15.894828 kernel: ata3.00: configured for UDMA/100 May 13 00:41:15.894840 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 13 00:41:15.898824 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 13 00:41:15.898851 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 13 00:41:15.926824 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 13 00:41:15.943453 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 00:41:15.943466 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 00:41:16.603924 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:41:16.603982 disk-uuid[538]: The operation has completed successfully. May 13 00:41:16.622824 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:41:16.622901 systemd[1]: Finished disk-uuid.service. May 13 00:41:16.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.634331 systemd[1]: Starting verity-setup.service... May 13 00:41:16.645823 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 13 00:41:16.664298 systemd[1]: Found device dev-mapper-usr.device. May 13 00:41:16.666884 systemd[1]: Mounting sysusr-usr.mount... May 13 00:41:16.668744 systemd[1]: Finished verity-setup.service. May 13 00:41:16.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.723826 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 00:41:16.723834 systemd[1]: Mounted sysusr-usr.mount. May 13 00:41:16.725278 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 00:41:16.727137 systemd[1]: Starting ignition-setup.service... May 13 00:41:16.728992 systemd[1]: Starting parse-ip-for-networkd.service... May 13 00:41:16.735297 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:41:16.735321 kernel: BTRFS info (device vda6): using free space tree May 13 00:41:16.735330 kernel: BTRFS info (device vda6): has skinny extents May 13 00:41:16.742835 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:41:16.752338 systemd[1]: Finished ignition-setup.service. May 13 00:41:16.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 00:41:16.753349 systemd[1]: Starting ignition-fetch-offline.service... May 13 00:41:16.789954 systemd[1]: Finished parse-ip-for-networkd.service. May 13 00:41:16.790189 ignition[648]: Ignition 2.14.0 May 13 00:41:16.790198 ignition[648]: Stage: fetch-offline May 13 00:41:16.790241 ignition[648]: no configs at "/usr/lib/ignition/base.d" May 13 00:41:16.790252 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:16.790369 ignition[648]: parsed url from cmdline: "" May 13 00:41:16.790373 ignition[648]: no config URL provided May 13 00:41:16.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.790379 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:41:16.790386 ignition[648]: no config at "/usr/lib/ignition/user.ign" May 13 00:41:16.790404 ignition[648]: op(1): [started] loading QEMU firmware config module May 13 00:41:16.790412 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 00:41:16.794028 ignition[648]: op(1): [finished] loading QEMU firmware config module May 13 00:41:16.799000 audit: BPF prog-id=9 op=LOAD May 13 00:41:16.800992 systemd[1]: Starting systemd-networkd.service... May 13 00:41:16.838960 ignition[648]: parsing config with SHA512: 4818b343898a06c6414c9e646c97d37f80a491d1e0d6808df13a2c73976e83e296329fecc5d56bae40414eb1643e1e91c8a3bcd8f5b8c1a9e83c3f074be8cc3f May 13 00:41:16.845823 unknown[648]: fetched base config from "system" May 13 00:41:16.846673 unknown[648]: fetched user config from "qemu" May 13 00:41:16.847898 ignition[648]: fetch-offline: fetch-offline passed May 13 00:41:16.847949 ignition[648]: Ignition finished successfully May 13 00:41:16.850473 systemd[1]: Finished ignition-fetch-offline.service. May 13 00:41:16.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.856166 systemd-networkd[719]: lo: Link UP May 13 00:41:16.856175 systemd-networkd[719]: lo: Gained carrier May 13 00:41:16.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.856558 systemd-networkd[719]: Enumeration completed May 13 00:41:16.856684 systemd[1]: Started systemd-networkd.service. May 13 00:41:16.856767 systemd-networkd[719]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:41:16.858293 systemd[1]: Reached target network.target. May 13 00:41:16.859097 systemd-networkd[719]: eth0: Link UP May 13 00:41:16.859100 systemd-networkd[719]: eth0: Gained carrier May 13 00:41:16.861033 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:41:16.868795 systemd[1]: Starting ignition-kargs.service... May 13 00:41:16.871048 systemd[1]: Starting iscsiuio.service... May 13 00:41:16.872221 systemd-networkd[719]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:41:16.876497 systemd[1]: Started iscsiuio.service. 
May 13 00:41:16.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.877974 systemd[1]: Starting iscsid.service... May 13 00:41:16.880716 iscsid[731]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 00:41:16.880716 iscsid[731]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 13 00:41:16.880716 iscsid[731]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 00:41:16.880716 iscsid[731]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 00:41:16.880716 iscsid[731]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 00:41:16.880716 iscsid[731]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 00:41:16.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.881836 systemd[1]: Started iscsid.service. May 13 00:41:16.883800 systemd[1]: Starting dracut-initqueue.service... May 13 00:41:16.895485 ignition[721]: Ignition 2.14.0 May 13 00:41:16.895493 ignition[721]: Stage: kargs May 13 00:41:16.895594 ignition[721]: no configs at "/usr/lib/ignition/base.d" May 13 00:41:16.895601 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:16.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.897797 systemd[1]: Finished dracut-initqueue.service. May 13 00:41:16.896485 ignition[721]: kargs: kargs passed May 13 00:41:16.898949 systemd[1]: Reached target remote-fs-pre.target. May 13 00:41:16.896516 ignition[721]: Ignition finished successfully May 13 00:41:16.900726 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:41:16.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.902400 systemd[1]: Reached target remote-fs.target. May 13 00:41:16.904189 systemd[1]: Starting dracut-pre-mount.service... May 13 00:41:16.905666 systemd[1]: Finished ignition-kargs.service. May 13 00:41:16.907666 systemd[1]: Starting ignition-disks.service... May 13 00:41:16.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.910904 systemd[1]: Finished dracut-pre-mount.service.
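
The iscsid warning above spells out the file it wants: /etc/iscsi/initiatorname.iscsi containing a single InitiatorName= line with a valid IQN. A minimal sketch that generates such a file; the domain and identifier are invented for illustration, and the file is written to the current directory rather than /etc/iscsi:

    import datetime

    # Illustrative values, not taken from this system. Strictly, the yyyy-mm
    # part of an IQN is the date the naming authority took ownership of the
    # domain; today's date is used here only as a placeholder.
    REVERSED_DOMAIN = "com.example"
    IDENTIFIER = "node1"

    def make_iqn(reversed_domain: str, identifier: str) -> str:
        """Build an iqn.yyyy-mm.<reversed domain name>[:identifier] string."""
        today = datetime.date.today()
        return f"iqn.{today.year:04d}-{today.month:02d}.{reversed_domain}:{identifier}"

    # Real target path would be /etc/iscsi/initiatorname.iscsi.
    with open("initiatorname.iscsi", "w") as f:
        f.write(f"InitiatorName={make_iqn(REVERSED_DOMAIN, IDENTIFIER)}\n")
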
May 13 00:41:16.913770 ignition[741]: Ignition 2.14.0 May 13 00:41:16.913777 ignition[741]: Stage: disks May 13 00:41:16.913867 ignition[741]: no configs at "/usr/lib/ignition/base.d" May 13 00:41:16.913874 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:16.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.915391 systemd[1]: Finished ignition-disks.service. May 13 00:41:16.914753 ignition[741]: disks: disks passed May 13 00:41:16.916847 systemd[1]: Reached target initrd-root-device.target. May 13 00:41:16.914781 ignition[741]: Ignition finished successfully May 13 00:41:16.918630 systemd[1]: Reached target local-fs-pre.target. May 13 00:41:16.919455 systemd[1]: Reached target local-fs.target. May 13 00:41:16.920012 systemd[1]: Reached target sysinit.target. May 13 00:41:16.920180 systemd[1]: Reached target basic.target. May 13 00:41:16.920954 systemd[1]: Starting systemd-fsck-root.service... May 13 00:41:16.931937 systemd-fsck[754]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 13 00:41:16.936826 systemd[1]: Finished systemd-fsck-root.service. May 13 00:41:16.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:16.939800 systemd[1]: Mounting sysroot.mount... May 13 00:41:16.945827 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 13 00:41:16.946405 systemd[1]: Mounted sysroot.mount. May 13 00:41:16.947750 systemd[1]: Reached target initrd-root-fs.target. May 13 00:41:16.949418 systemd[1]: Mounting sysroot-usr.mount... May 13 00:41:16.950946 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 00:41:16.950985 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:41:16.951002 systemd[1]: Reached target ignition-diskful.target. May 13 00:41:16.957815 systemd[1]: Mounted sysroot-usr.mount. May 13 00:41:16.959925 systemd[1]: Starting initrd-setup-root.service... May 13 00:41:16.965496 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:41:16.969507 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory May 13 00:41:16.973252 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:41:16.976537 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:41:17.001557 systemd[1]: Finished initrd-setup-root.service. May 13 00:41:17.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:17.002812 systemd[1]: Starting ignition-mount.service... May 13 00:41:17.004567 systemd[1]: Starting sysroot-boot.service... May 13 00:41:17.007891 bash[805]: umount: /sysroot/usr/share/oem: not mounted. 
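
The systemd-fsck line above reports the root filesystem as clean, with 619/553520 files and 56023/553472 blocks in use. A small sketch parsing that summary line into usage percentages:

    import re

    LINE = "ROOT: clean, 619/553520 files, 56023/553472 blocks"

    m = re.match(r"(?P<label>\S+): (?P<state>\w+), (?P<fu>\d+)/(?P<ft>\d+) files, "
                 r"(?P<bu>\d+)/(?P<bt>\d+) blocks", LINE)
    if m:
        files_pct = 100 * int(m["fu"]) / int(m["ft"])    # inode usage
        blocks_pct = 100 * int(m["bu"]) / int(m["bt"])   # block usage
        print(f"{m['label']}: {m['state']}, "
              f"{files_pct:.2f}% inodes, {blocks_pct:.2f}% blocks used")
    # -> ROOT: clean, 0.11% inodes, 10.12% blocks used
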
May 13 00:41:17.014708 ignition[806]: INFO : Ignition 2.14.0 May 13 00:41:17.015738 ignition[806]: INFO : Stage: mount May 13 00:41:17.015738 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:41:17.015738 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:17.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:17.019529 ignition[806]: INFO : mount: mount passed May 13 00:41:17.019529 ignition[806]: INFO : Ignition finished successfully May 13 00:41:17.016903 systemd[1]: Finished ignition-mount.service. May 13 00:41:17.025078 systemd[1]: Finished sysroot-boot.service. May 13 00:41:17.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:17.675152 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 00:41:17.680820 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) May 13 00:41:17.683532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:41:17.683550 kernel: BTRFS info (device vda6): using free space tree May 13 00:41:17.683559 kernel: BTRFS info (device vda6): has skinny extents May 13 00:41:17.686693 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 00:41:17.688993 systemd[1]: Starting ignition-files.service... May 13 00:41:17.702079 ignition[835]: INFO : Ignition 2.14.0 May 13 00:41:17.702079 ignition[835]: INFO : Stage: files May 13 00:41:17.703644 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:41:17.703644 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:17.706615 ignition[835]: DEBUG : files: compiled without relabeling support, skipping May 13 00:41:17.707883 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:41:17.707883 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:41:17.710744 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:41:17.712128 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:41:17.713818 unknown[835]: wrote ssh authorized keys file for user: core May 13 00:41:17.714864 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:41:17.716446 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 00:41:17.718305 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 13 00:41:17.766681 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 00:41:17.905946 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 00:41:17.907960 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:41:17.907960 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 13 00:41:17.963048 systemd-networkd[719]: eth0: Gained IPv6LL May 13 00:41:18.294446 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:41:18.392667 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:41:18.394552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 13 00:41:18.942256 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 00:41:19.960730 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:41:19.960730 ignition[835]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 
00:41:19.960730 ignition[835]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:41:19.960730 ignition[835]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:41:19.988644 ignition[835]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:41:19.990359 ignition[835]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:41:19.990359 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:41:19.990359 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:41:19.990359 ignition[835]: INFO : files: files passed May 13 00:41:19.990359 ignition[835]: INFO : Ignition finished successfully May 13 00:41:20.014848 kernel: kauditd_printk_skb: 25 callbacks suppressed May 13 00:41:20.014870 kernel: audit: type=1130 audit(1747096879.991:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.014881 kernel: audit: type=1130 audit(1747096880.002:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.014891 kernel: audit: type=1130 audit(1747096880.007:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.014903 kernel: audit: type=1131 audit(1747096880.007:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:19.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:20.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:19.990045 systemd[1]: Finished ignition-files.service. May 13 00:41:19.992091 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 00:41:19.997773 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 00:41:20.019712 initrd-setup-root-after-ignition[859]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 13 00:41:19.998365 systemd[1]: Starting ignition-quench.service... May 13 00:41:20.022107 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:41:20.000333 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 00:41:20.003063 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:41:20.003125 systemd[1]: Finished ignition-quench.service. May 13 00:41:20.007519 systemd[1]: Reached target ignition-complete.target. May 13 00:41:20.015399 systemd[1]: Starting initrd-parse-etc.service... May 13 00:41:20.035319 kernel: audit: type=1130 audit(1747096880.028:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.035334 kernel: audit: type=1131 audit(1747096880.028:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.026450 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:41:20.026523 systemd[1]: Finished initrd-parse-etc.service. May 13 00:41:20.028229 systemd[1]: Reached target initrd-fs.target. May 13 00:41:20.035325 systemd[1]: Reached target initrd.target. May 13 00:41:20.036128 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 00:41:20.036708 systemd[1]: Starting dracut-pre-pivot.service... May 13 00:41:20.045953 systemd[1]: Finished dracut-pre-pivot.service. May 13 00:41:20.051041 kernel: audit: type=1130 audit(1747096880.045:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.047382 systemd[1]: Starting initrd-cleanup.service... 
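
The kauditd records interleaved above carry their own wall-clock stamp of the form audit(1747096880.028:40): a Unix epoch with milliseconds, then a per-record serial number. A short sketch decoding one such stamp; the decoded time matches the surrounding journal timestamps (epoch 1747096880 is 00:41:20 UTC on May 13 2025):

    import datetime
    import re

    STAMP = "audit(1747096880.028:40)"

    m = re.match(r"audit\((?P<epoch>\d+\.\d+):(?P<serial>\d+)\)", STAMP)
    epoch = float(m["epoch"])
    when = datetime.datetime.fromtimestamp(epoch, tz=datetime.timezone.utc)
    print(when.isoformat(), "serial", m["serial"])
    # -> 2025-05-13T00:41:20.028000+00:00 serial 40
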
May 13 00:41:20.055489 systemd[1]: Stopped target nss-lookup.target. May 13 00:41:20.056428 systemd[1]: Stopped target remote-cryptsetup.target. May 13 00:41:20.057976 systemd[1]: Stopped target timers.target. May 13 00:41:20.059528 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:41:20.065444 kernel: audit: type=1131 audit(1747096880.060:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.059614 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:41:20.061103 systemd[1]: Stopped target initrd.target. May 13 00:41:20.065521 systemd[1]: Stopped target basic.target. May 13 00:41:20.067062 systemd[1]: Stopped target ignition-complete.target. May 13 00:41:20.068586 systemd[1]: Stopped target ignition-diskful.target. May 13 00:41:20.070139 systemd[1]: Stopped target initrd-root-device.target. May 13 00:41:20.071857 systemd[1]: Stopped target remote-fs.target. May 13 00:41:20.073445 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:41:20.075106 systemd[1]: Stopped target sysinit.target. May 13 00:41:20.076593 systemd[1]: Stopped target local-fs.target. May 13 00:41:20.078140 systemd[1]: Stopped target local-fs-pre.target. May 13 00:41:20.079681 systemd[1]: Stopped target swap.target. May 13 00:41:20.086951 kernel: audit: type=1131 audit(1747096880.081:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.081176 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:41:20.081267 systemd[1]: Stopped dracut-pre-mount.service. May 13 00:41:20.093130 kernel: audit: type=1131 audit(1747096880.087:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.082788 systemd[1]: Stopped target cryptsetup.target. May 13 00:41:20.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.087002 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:41:20.087087 systemd[1]: Stopped dracut-initqueue.service. May 13 00:41:20.088856 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:41:20.088941 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:41:20.093254 systemd[1]: Stopped target paths.target. May 13 00:41:20.094687 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:41:20.099876 systemd[1]: Stopped systemd-ask-password-console.path. 
May 13 00:41:20.101063 systemd[1]: Stopped target slices.target. May 13 00:41:20.102562 systemd[1]: Stopped target sockets.target. May 13 00:41:20.104044 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:41:20.104117 systemd[1]: Closed iscsid.socket. May 13 00:41:20.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.105457 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:41:20.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.105553 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:41:20.107244 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:41:20.107325 systemd[1]: Stopped ignition-files.service. May 13 00:41:20.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.109348 systemd[1]: Stopping ignition-mount.service... May 13 00:41:20.116818 ignition[876]: INFO : Ignition 2.14.0 May 13 00:41:20.116818 ignition[876]: INFO : Stage: umount May 13 00:41:20.116818 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:41:20.116818 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:20.116818 ignition[876]: INFO : umount: umount passed May 13 00:41:20.116818 ignition[876]: INFO : Ignition finished successfully May 13 00:41:20.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.110488 systemd[1]: Stopping iscsiuio.service... May 13 00:41:20.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.111924 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:41:20.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.112070 systemd[1]: Stopped kmod-static-nodes.service. 
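
Each unit in the teardown above is bracketed by matched SERVICE_START/SERVICE_STOP audit records, so unit lifetimes can be recovered by pairing them. A sketch of that pairing; the input tuples are abbreviated stand-ins for real records from this log (disk-uuid at 00:41:16.623, dracut-pre-pivot from audit serials 42 and 43):

    RECORDS = [
        # (epoch seconds, record type, unit name)
        (1747096876.623, "SERVICE_START", "disk-uuid"),
        (1747096876.623, "SERVICE_STOP", "disk-uuid"),
        (1747096880.045, "SERVICE_START", "dracut-pre-pivot"),
        (1747096880.060, "SERVICE_STOP", "dracut-pre-pivot"),
    ]

    started = {}
    for ts, kind, unit in RECORDS:
        if kind == "SERVICE_START":
            started[unit] = ts
        elif kind == "SERVICE_STOP" and unit in started:
            print(f"{unit}: ran {ts - started.pop(unit):.3f}s")
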
May 13 00:41:20.114360 systemd[1]: Stopping sysroot-boot.service... May 13 00:41:20.115090 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:41:20.115218 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:41:20.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.116943 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:41:20.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.117062 systemd[1]: Stopped dracut-pre-trigger.service. May 13 00:41:20.119991 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 00:41:20.120068 systemd[1]: Stopped iscsiuio.service. May 13 00:41:20.121236 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:41:20.121300 systemd[1]: Stopped ignition-mount.service. May 13 00:41:20.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.123255 systemd[1]: Stopped target network.target. May 13 00:41:20.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.124604 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:41:20.124632 systemd[1]: Closed iscsiuio.socket. May 13 00:41:20.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.126158 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:41:20.126189 systemd[1]: Stopped ignition-disks.service. May 13 00:41:20.127012 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:41:20.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.127041 systemd[1]: Stopped ignition-kargs.service. May 13 00:41:20.128716 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:41:20.128747 systemd[1]: Stopped ignition-setup.service. May 13 00:41:20.129644 systemd[1]: Stopping systemd-networkd.service... May 13 00:41:20.155000 audit: BPF prog-id=6 op=UNLOAD May 13 00:41:20.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.131476 systemd[1]: Stopping systemd-resolved.service... May 13 00:41:20.133739 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:41:20.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:20.134178 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:41:20.134249 systemd[1]: Finished initrd-cleanup.service. May 13 00:41:20.135834 systemd-networkd[719]: eth0: DHCPv6 lease lost May 13 00:41:20.164000 audit: BPF prog-id=9 op=UNLOAD May 13 00:41:20.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.136644 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:41:20.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.136718 systemd[1]: Stopped systemd-networkd.service. May 13 00:41:20.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.140539 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:41:20.140564 systemd[1]: Closed systemd-networkd.socket. May 13 00:41:20.142488 systemd[1]: Stopping network-cleanup.service... May 13 00:41:20.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.143712 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:41:20.143747 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 00:41:20.145364 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:41:20.145396 systemd[1]: Stopped systemd-sysctl.service. May 13 00:41:20.147162 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:41:20.147192 systemd[1]: Stopped systemd-modules-load.service. May 13 00:41:20.148983 systemd[1]: Stopping systemd-udevd.service... May 13 00:41:20.151003 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:41:20.151391 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:41:20.151464 systemd[1]: Stopped systemd-resolved.service. May 13 00:41:20.156772 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:41:20.156883 systemd[1]: Stopped network-cleanup.service. May 13 00:41:20.158381 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:41:20.158474 systemd[1]: Stopped systemd-udevd.service. May 13 00:41:20.161098 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:41:20.161128 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:41:20.162735 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:41:20.162758 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:41:20.164315 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
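
The "BPF prog-id=N op=LOAD/UNLOAD" audit events scattered through this section track systemd attaching and detaching its BPF programs; prog-id=9, loaded back when systemd-networkd started, is unloaded here alongside prog-id=6. A sketch replaying such events to see which program ids remain loaded at any point:

    EVENTS = [  # (prog_id, op) as they appear in the audit stream above
        (9, "LOAD"), (6, "UNLOAD"), (9, "UNLOAD"),
    ]

    live = set()
    for prog_id, op in EVENTS:
        if op == "LOAD":
            live.add(prog_id)
        else:
            live.discard(prog_id)
    print("still loaded:", sorted(live))
    # -> still loaded: []
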
May 13 00:41:20.164348 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:41:20.165212 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:41:20.165244 systemd[1]: Stopped dracut-cmdline.service. May 13 00:41:20.166695 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:41:20.166725 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 00:41:20.167681 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 00:41:20.169344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:41:20.169385 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 00:41:20.172453 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:41:20.172519 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 00:41:20.221388 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:41:20.221477 systemd[1]: Stopped sysroot-boot.service. May 13 00:41:20.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.223239 systemd[1]: Reached target initrd-switch-root.target. May 13 00:41:20.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:20.224694 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:41:20.224731 systemd[1]: Stopped initrd-setup-root.service. May 13 00:41:20.225825 systemd[1]: Starting initrd-switch-root.service... May 13 00:41:20.241402 systemd[1]: Switching root. May 13 00:41:20.262525 iscsid[731]: iscsid shutting down. May 13 00:41:20.263292 systemd-journald[196]: Received SIGTERM from PID 1 (systemd). May 13 00:41:20.263337 systemd-journald[196]: Journal stopped May 13 00:41:22.908843 kernel: SELinux: Class mctp_socket not defined in policy. May 13 00:41:22.908903 kernel: SELinux: Class anon_inode not defined in policy. May 13 00:41:22.908917 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 00:41:22.908933 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:41:22.908944 kernel: SELinux: policy capability open_perms=1 May 13 00:41:22.908960 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:41:22.908969 kernel: SELinux: policy capability always_check_network=0 May 13 00:41:22.908980 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:41:22.908992 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:41:22.909001 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:41:22.909018 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:41:22.909028 systemd[1]: Successfully loaded SELinux policy in 37.851ms. May 13 00:41:22.909046 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.515ms. May 13 00:41:22.909058 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:41:22.909068 systemd[1]: Detected virtualization kvm. May 13 00:41:22.909080 systemd[1]: Detected architecture x86-64. 
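
The systemd 252 banner above encodes its build-time features as a string of +/- flags. A sketch splitting the flag portion of that banner (copied verbatim; the trailing default-hierarchy=unified token is omitted) into enabled and disabled sets:

    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP +SYSVINIT")

    enabled = {t[1:] for t in FEATURES.split() if t.startswith("+")}
    disabled = {t[1:] for t in FEATURES.split() if t.startswith("-")}
    print(f"{len(enabled)} enabled, {len(disabled)} disabled")
    print("SELINUX on?", "SELINUX" in enabled)   # True, matching the policy load above
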
May 13 00:41:22.909090 systemd[1]: Detected first boot. May 13 00:41:22.909100 systemd[1]: Initializing machine ID from VM UUID. May 13 00:41:22.909110 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 00:41:22.909119 systemd[1]: Populated /etc with preset unit settings. May 13 00:41:22.909129 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:22.909140 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:22.909155 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:22.909168 systemd[1]: iscsid.service: Deactivated successfully. May 13 00:41:22.909178 systemd[1]: Stopped iscsid.service. May 13 00:41:22.909188 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:41:22.909198 systemd[1]: Stopped initrd-switch-root.service. May 13 00:41:22.909208 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:41:22.909219 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 00:41:22.909231 systemd[1]: Created slice system-addon\x2drun.slice. May 13 00:41:22.909241 systemd[1]: Created slice system-getty.slice. May 13 00:41:22.909251 systemd[1]: Created slice system-modprobe.slice. May 13 00:41:22.909261 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 00:41:22.909272 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:41:22.909283 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:41:22.909292 systemd[1]: Created slice user.slice. May 13 00:41:22.909303 systemd[1]: Started systemd-ask-password-console.path. May 13 00:41:22.909315 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:41:22.909325 systemd[1]: Set up automount boot.automount. May 13 00:41:22.909335 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:41:22.909346 systemd[1]: Stopped target initrd-switch-root.target. May 13 00:41:22.909357 systemd[1]: Stopped target initrd-fs.target. May 13 00:41:22.909367 systemd[1]: Stopped target initrd-root-fs.target. May 13 00:41:22.909377 systemd[1]: Reached target integritysetup.target. May 13 00:41:22.909388 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:41:22.909399 systemd[1]: Reached target remote-fs.target. May 13 00:41:22.909412 systemd[1]: Reached target slices.target. May 13 00:41:22.909423 systemd[1]: Reached target swap.target. May 13 00:41:22.909435 systemd[1]: Reached target torcx.target. May 13 00:41:22.909446 systemd[1]: Reached target veritysetup.target. May 13 00:41:22.909456 systemd[1]: Listening on systemd-coredump.socket. May 13 00:41:22.909466 systemd[1]: Listening on systemd-initctl.socket. May 13 00:41:22.909476 systemd[1]: Listening on systemd-networkd.socket. May 13 00:41:22.909486 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:41:22.909497 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:41:22.909507 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:41:22.909517 systemd[1]: Mounting dev-hugepages.mount... May 13 00:41:22.909527 systemd[1]: Mounting dev-mqueue.mount... 
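
The deprecation warnings above for locksmithd.service (CPUShares= to CPUWeight=, MemoryLimit= to MemoryMax=) describe a mechanical unit-file migration. A hedged sketch of that rewrite, assuming systemd's documented compat mapping where the default 1024 shares corresponds to weight 100; the unit text is a made-up stand-in, not the real locksmithd.service:

    UNIT_LINES = [
        "[Service]",
        "CPUShares=512",
        "MemoryLimit=128M",
    ]

    def migrate(line: str) -> str:
        if line.startswith("CPUShares="):
            shares = int(line.split("=", 1)[1])
            # Assumed mapping: shares 1024 -> weight 100, clamped to
            # CPUWeight's 1..10000 range.
            return f"CPUWeight={max(1, min(10000, shares * 100 // 1024))}"
        if line.startswith("MemoryLimit="):
            return "MemoryMax=" + line.split("=", 1)[1]
        return line

    for line in UNIT_LINES:
        print(migrate(line))
    # -> [Service] / CPUWeight=50 / MemoryMax=128M
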
May 13 00:41:22.909537 systemd[1]: Mounting media.mount... May 13 00:41:22.909547 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:22.909557 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:41:22.909567 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:41:22.909577 systemd[1]: Mounting tmp.mount... May 13 00:41:22.909588 systemd[1]: Starting flatcar-tmpfiles.service... May 13 00:41:22.909598 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:22.909608 systemd[1]: Starting kmod-static-nodes.service... May 13 00:41:22.909618 systemd[1]: Starting modprobe@configfs.service... May 13 00:41:22.909628 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:22.909639 systemd[1]: Starting modprobe@drm.service... May 13 00:41:22.909650 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:22.909660 systemd[1]: Starting modprobe@fuse.service... May 13 00:41:22.909670 systemd[1]: Starting modprobe@loop.service... May 13 00:41:22.909682 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:41:22.909693 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:41:22.909703 systemd[1]: Stopped systemd-fsck-root.service. May 13 00:41:22.909713 kernel: fuse: init (API version 7.34) May 13 00:41:22.909723 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:41:22.909733 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:41:22.909742 systemd[1]: Stopped systemd-journald.service. May 13 00:41:22.909752 kernel: loop: module loaded May 13 00:41:22.909762 systemd[1]: Starting systemd-journald.service... May 13 00:41:22.909773 systemd[1]: Starting systemd-modules-load.service... May 13 00:41:22.909783 systemd[1]: Starting systemd-network-generator.service... May 13 00:41:22.909794 systemd[1]: Starting systemd-remount-fs.service... May 13 00:41:22.909815 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:41:22.909826 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:41:22.909836 systemd[1]: Stopped verity-setup.service. May 13 00:41:22.909847 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:22.909859 systemd-journald[992]: Journal started May 13 00:41:22.909907 systemd-journald[992]: Runtime Journal (/run/log/journal/6d9a2259ad4e4064b66c50e17c404336) is 6.0M, max 48.4M, 42.4M free. 
May 13 00:41:20.319000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:41:20.715000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:41:20.715000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:41:20.715000 audit: BPF prog-id=10 op=LOAD May 13 00:41:20.715000 audit: BPF prog-id=10 op=UNLOAD May 13 00:41:20.715000 audit: BPF prog-id=11 op=LOAD May 13 00:41:20.715000 audit: BPF prog-id=11 op=UNLOAD May 13 00:41:20.741000 audit[910]: AVC avc: denied { associate } for pid=910 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 13 00:41:20.741000 audit[910]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=893 pid=910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:20.741000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:41:20.743000 audit[910]: AVC avc: denied { associate } for pid=910 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 13 00:41:20.743000 audit[910]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=893 pid=910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:20.743000 audit: CWD cwd="/" May 13 00:41:20.743000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:20.743000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:20.743000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:41:22.774000 audit: BPF prog-id=12 op=LOAD May 13 00:41:22.774000 audit: BPF prog-id=3 op=UNLOAD May 13 00:41:22.774000 audit: BPF prog-id=13 op=LOAD May 13 00:41:22.774000 audit: BPF prog-id=14 op=LOAD May 13 00:41:22.774000 audit: BPF prog-id=4 op=UNLOAD May 13 00:41:22.774000 audit: BPF prog-id=5 op=UNLOAD May 13 00:41:22.775000 audit: BPF prog-id=15 op=LOAD May 13 00:41:22.775000 audit: BPF prog-id=12 op=UNLOAD May 13 
00:41:22.775000 audit: BPF prog-id=16 op=LOAD May 13 00:41:22.775000 audit: BPF prog-id=17 op=LOAD May 13 00:41:22.775000 audit: BPF prog-id=13 op=UNLOAD May 13 00:41:22.775000 audit: BPF prog-id=14 op=UNLOAD May 13 00:41:22.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.792000 audit: BPF prog-id=15 op=UNLOAD May 13 00:41:22.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.888000 audit: BPF prog-id=18 op=LOAD May 13 00:41:22.888000 audit: BPF prog-id=19 op=LOAD May 13 00:41:22.888000 audit: BPF prog-id=20 op=LOAD May 13 00:41:22.888000 audit: BPF prog-id=16 op=UNLOAD May 13 00:41:22.888000 audit: BPF prog-id=17 op=UNLOAD May 13 00:41:22.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:22.906000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:41:22.906000 audit[992]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff92fbcd40 a2=4000 a3=7fff92fbcddc items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:22.906000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:41:20.740847 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:22.773558 systemd[1]: Queued start job for default target multi-user.target. May 13 00:41:20.741054 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:41:22.773570 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 00:41:22.911926 systemd[1]: Started systemd-journald.service. May 13 00:41:20.741069 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:41:22.777299 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:41:20.741095 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 13 00:41:20.741104 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="skipped missing lower profile" missing profile=oem May 13 00:41:20.741130 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 13 00:41:22.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.912296 systemd[1]: Mounted dev-hugepages.mount. May 13 00:41:20.741141 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 13 00:41:20.741320 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 13 00:41:22.913253 systemd[1]: Mounted dev-mqueue.mount. May 13 00:41:20.741348 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:41:22.914089 systemd[1]: Mounted media.mount. 
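
The PROCTITLE audit records a little further up carry the torcx-generator command line hex-encoded, with NUL bytes separating the argv entries. A sketch decoding the hex exactly as it appears in the record (audit truncates long proctitles, which is why the last argument ends mid-path):

    PROCTITLE = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F"
                 "72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F"
                 "67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72"
                 "2E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61")

    for arg in bytes.fromhex(PROCTITLE).split(b"\x00"):
        print(arg.decode())
    # -> /usr/lib/systemd/system-generators/torcx-generator
    #    /run/systemd/generator
    #    /run/systemd/generator.early
    #    /run/systemd/generator.la   (truncated in the audit record)
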
May 13 00:41:20.741359 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:41:22.914846 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:41:20.741969 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 13 00:41:22.915729 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:41:20.742000 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 13 00:41:22.916649 systemd[1]: Mounted tmp.mount. May 13 00:41:20.742031 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 13 00:41:20.742044 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 13 00:41:20.742060 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 13 00:41:20.742073 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 13 00:41:22.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.917762 systemd[1]: Finished flatcar-tmpfiles.service. 
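
The torcx-generator messages above show it probing an ordered list of store paths and logging "store skipped" for each one that does not exist. A sketch of that lookup order, with the paths copied from the generator's "common configuration parsed" line and os.path.isdir standing in for its directory open:

    import os

    STORE_PATHS = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.7",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.7",
        "/var/lib/torcx/store",
    ]

    for path in STORE_PATHS:
        if os.path.isdir(path):
            print("store found:", path)
        else:
            print(f"store skipped: open {path}: no such file or directory")
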
May 13 00:41:22.517984 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:22Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:22.518229 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:22Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:22.518325 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:22Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:22.518493 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:22Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:41:22.518549 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:22Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 13 00:41:22.518608 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-13T00:41:22Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 13 00:41:22.919129 systemd[1]: Finished kmod-static-nodes.service. May 13 00:41:22.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.920261 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:41:22.920457 systemd[1]: Finished modprobe@configfs.service. May 13 00:41:22.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.921673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:22.921891 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:22.923050 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:41:22.923221 systemd[1]: Finished modprobe@drm.service. May 13 00:41:22.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:22.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.924300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:22.924438 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:22.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.925613 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:41:22.925766 systemd[1]: Finished modprobe@fuse.service. May 13 00:41:22.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.926952 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:22.927118 systemd[1]: Finished modprobe@loop.service. May 13 00:41:22.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.928349 systemd[1]: Finished systemd-modules-load.service. May 13 00:41:22.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.929556 systemd[1]: Finished systemd-network-generator.service. May 13 00:41:22.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.930721 systemd[1]: Finished systemd-remount-fs.service. May 13 00:41:22.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 00:41:22.932044 systemd[1]: Reached target network-pre.target. May 13 00:41:22.934179 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:41:22.935977 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:41:22.936730 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:41:22.938073 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:41:22.939770 systemd[1]: Starting systemd-journal-flush.service... May 13 00:41:22.940837 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:22.941871 systemd[1]: Starting systemd-random-seed.service... May 13 00:41:22.948661 systemd-journald[992]: Time spent on flushing to /var/log/journal/6d9a2259ad4e4064b66c50e17c404336 is 13.818ms for 1163 entries. May 13 00:41:22.948661 systemd-journald[992]: System Journal (/var/log/journal/6d9a2259ad4e4064b66c50e17c404336) is 8.0M, max 195.6M, 187.6M free. May 13 00:41:22.979267 systemd-journald[992]: Received client request to flush runtime journal. May 13 00:41:22.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:22.942852 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:22.944305 systemd[1]: Starting systemd-sysctl.service... May 13 00:41:22.946665 systemd[1]: Starting systemd-sysusers.service... May 13 00:41:22.950139 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:41:22.982261 udevadm[1014]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:41:22.951118 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:41:22.952822 systemd[1]: Finished systemd-random-seed.service. May 13 00:41:22.954291 systemd[1]: Reached target first-boot-complete.target. May 13 00:41:22.961645 systemd[1]: Finished systemd-sysctl.service. May 13 00:41:22.963260 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:41:22.965069 systemd[1]: Starting systemd-udev-settle.service... May 13 00:41:22.969762 systemd[1]: Finished systemd-sysusers.service. May 13 00:41:22.980254 systemd[1]: Finished systemd-journal-flush.service. May 13 00:41:23.446874 systemd[1]: Finished systemd-hwdb-update.service. 
May 13 00:41:23.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:23.447000 audit: BPF prog-id=21 op=LOAD May 13 00:41:23.447000 audit: BPF prog-id=22 op=LOAD May 13 00:41:23.447000 audit: BPF prog-id=7 op=UNLOAD May 13 00:41:23.447000 audit: BPF prog-id=8 op=UNLOAD May 13 00:41:23.448950 systemd[1]: Starting systemd-udevd.service... May 13 00:41:23.464424 systemd-udevd[1017]: Using default interface naming scheme 'v252'. May 13 00:41:23.475448 systemd[1]: Started systemd-udevd.service. May 13 00:41:23.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:23.476000 audit: BPF prog-id=23 op=LOAD May 13 00:41:23.478061 systemd[1]: Starting systemd-networkd.service... May 13 00:41:23.482000 audit: BPF prog-id=24 op=LOAD May 13 00:41:23.482000 audit: BPF prog-id=25 op=LOAD May 13 00:41:23.482000 audit: BPF prog-id=26 op=LOAD May 13 00:41:23.484305 systemd[1]: Starting systemd-userdbd.service... May 13 00:41:23.512357 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 13 00:41:23.514997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:41:23.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:23.516454 systemd[1]: Started systemd-userdbd.service. May 13 00:41:23.544836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:41:23.548824 kernel: ACPI: button: Power Button [PWRF] May 13 00:41:23.554400 systemd-networkd[1025]: lo: Link UP May 13 00:41:23.554629 systemd-networkd[1025]: lo: Gained carrier May 13 00:41:23.555065 systemd-networkd[1025]: Enumeration completed May 13 00:41:23.555198 systemd[1]: Started systemd-networkd.service. May 13 00:41:23.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:23.556154 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 13 00:41:23.556994 systemd-networkd[1025]: eth0: Link UP May 13 00:41:23.557061 systemd-networkd[1025]: eth0: Gained carrier May 13 00:41:23.564000 audit[1029]: AVC avc: denied { confidentiality } for pid=1029 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 00:41:23.570936 systemd-networkd[1025]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:41:23.564000 audit[1029]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556949923aa0 a1=338ac a2=7fb6661b2bc5 a3=5 items=110 ppid=1017 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:23.564000 audit: CWD cwd="/" May 13 00:41:23.564000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=1 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=2 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=3 name=(null) inode=14640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=4 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=5 name=(null) inode=14641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=6 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=7 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=8 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=9 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=10 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=11 name=(null) inode=14644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=12 name=(null) inode=14642 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=13 name=(null) inode=14645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=14 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=15 name=(null) inode=14646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=16 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=17 name=(null) inode=14647 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=18 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=19 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=20 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=21 name=(null) inode=14649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=22 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=23 name=(null) inode=14650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=24 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=25 name=(null) inode=14651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=26 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=27 name=(null) inode=14652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=28 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=29 name=(null) inode=14653 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=30 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=31 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=32 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=33 name=(null) inode=14655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=34 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=35 name=(null) inode=14656 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=36 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=37 name=(null) inode=14657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=38 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=39 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=40 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=41 name=(null) inode=14659 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=42 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=43 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=44 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=45 name=(null) inode=14661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=46 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=47 name=(null) inode=14662 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=48 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=49 name=(null) inode=14663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=50 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=51 name=(null) inode=14664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=52 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=53 name=(null) inode=14665 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=55 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=56 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=57 name=(null) inode=14667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=58 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=59 name=(null) inode=14668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=60 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=61 
name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=62 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=63 name=(null) inode=14670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=64 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=65 name=(null) inode=14671 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=66 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=67 name=(null) inode=14672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=68 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=69 name=(null) inode=14673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=70 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=71 name=(null) inode=14674 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=72 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=73 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=74 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=75 name=(null) inode=14676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=76 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=77 name=(null) inode=14677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=78 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=79 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=80 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=81 name=(null) inode=14679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=82 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=83 name=(null) inode=14680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=84 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=85 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=86 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=87 name=(null) inode=14682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=88 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=89 name=(null) inode=14683 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=90 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=91 name=(null) inode=14684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=92 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=93 name=(null) inode=14685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=94 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=95 name=(null) inode=14686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=96 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=97 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=98 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=99 name=(null) inode=14688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=100 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=101 name=(null) inode=14689 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=102 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=103 name=(null) inode=14690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=104 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=105 name=(null) inode=14691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=106 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=107 name=(null) inode=14692 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: PATH item=109 name=(null) inode=15534 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:23.564000 audit: 
PROCTITLE proctitle="(udev-worker)" May 13 00:41:23.598713 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 13 00:41:23.604034 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 00:41:23.604195 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:41:23.604216 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 00:41:23.604325 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 00:41:23.606838 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:41:23.639060 kernel: kvm: Nested Virtualization enabled May 13 00:41:23.639153 kernel: SVM: kvm: Nested Paging enabled May 13 00:41:23.639173 kernel: SVM: Virtual VMLOAD VMSAVE supported May 13 00:41:23.640238 kernel: SVM: Virtual GIF supported May 13 00:41:23.654838 kernel: EDAC MC: Ver: 3.0.0 May 13 00:41:23.681246 systemd[1]: Finished systemd-udev-settle.service. May 13 00:41:23.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:23.683506 systemd[1]: Starting lvm2-activation-early.service... May 13 00:41:23.690464 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:41:23.716735 systemd[1]: Finished lvm2-activation-early.service. May 13 00:41:23.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:23.717945 systemd[1]: Reached target cryptsetup.target. May 13 00:41:23.719957 systemd[1]: Starting lvm2-activation.service... May 13 00:41:23.723451 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:41:23.748022 systemd[1]: Finished lvm2-activation.service. May 13 00:41:23.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:23.749199 systemd[1]: Reached target local-fs-pre.target. May 13 00:41:23.750212 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:41:23.750241 systemd[1]: Reached target local-fs.target. May 13 00:41:23.751063 systemd[1]: Reached target machines.target. May 13 00:41:23.752918 systemd[1]: Starting ldconfig.service... May 13 00:41:23.753917 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:23.753958 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:23.754654 systemd[1]: Starting systemd-boot-update.service... May 13 00:41:23.756389 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:41:23.758393 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:41:23.760415 systemd[1]: Starting systemd-sysext.service... May 13 00:41:23.761376 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1057 (bootctl) May 13 00:41:23.762239 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
May 13 00:41:23.766172 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:41:23.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:23.768934 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:41:23.772250 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:41:23.772409 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:41:23.782904 kernel: loop0: detected capacity change from 0 to 218376 May 13 00:41:23.797936 systemd-fsck[1065]: fsck.fat 4.2 (2021-01-31) May 13 00:41:23.797936 systemd-fsck[1065]: /dev/vda1: 791 files, 120712/258078 clusters May 13 00:41:23.799282 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:41:23.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:23.802136 systemd[1]: Mounting boot.mount... May 13 00:41:23.815306 systemd[1]: Mounted boot.mount. May 13 00:41:24.047013 systemd[1]: Finished systemd-boot-update.service. May 13 00:41:24.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.050891 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:41:24.051387 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:41:24.052838 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:41:24.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.070854 kernel: loop1: detected capacity change from 0 to 218376 May 13 00:41:24.074724 (sd-sysext)[1070]: Using extensions 'kubernetes'. May 13 00:41:24.075083 (sd-sysext)[1070]: Merged extensions into '/usr'. May 13 00:41:24.089453 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:24.090746 systemd[1]: Mounting usr-share-oem.mount... May 13 00:41:24.091727 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:24.092651 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:24.094311 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:24.096313 systemd[1]: Starting modprobe@loop.service... May 13 00:41:24.097193 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:24.097316 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:24.097417 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:24.099912 systemd[1]: Mounted usr-share-oem.mount. May 13 00:41:24.101323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 13 00:41:24.101470 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:24.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.102928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:24.103027 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:24.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.104662 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:24.104834 systemd[1]: Finished modprobe@loop.service. May 13 00:41:24.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.106356 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:24.106474 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:24.107566 systemd[1]: Finished systemd-sysext.service. May 13 00:41:24.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.109836 systemd[1]: Starting ensure-sysext.service... May 13 00:41:24.111655 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:41:24.112538 ldconfig[1056]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:41:24.116540 systemd[1]: Reloading. May 13 00:41:24.122757 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:41:24.124591 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:41:24.127087 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 13 00:41:24.157560 /usr/lib/systemd/system-generators/torcx-generator[1096]: time="2025-05-13T00:41:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:24.157586 /usr/lib/systemd/system-generators/torcx-generator[1096]: time="2025-05-13T00:41:24Z" level=info msg="torcx already run" May 13 00:41:24.232849 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:24.232865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:24.249372 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:24.298000 audit: BPF prog-id=27 op=LOAD May 13 00:41:24.298000 audit: BPF prog-id=28 op=LOAD May 13 00:41:24.299000 audit: BPF prog-id=21 op=UNLOAD May 13 00:41:24.299000 audit: BPF prog-id=22 op=UNLOAD May 13 00:41:24.299000 audit: BPF prog-id=29 op=LOAD May 13 00:41:24.299000 audit: BPF prog-id=23 op=UNLOAD May 13 00:41:24.301000 audit: BPF prog-id=30 op=LOAD May 13 00:41:24.301000 audit: BPF prog-id=18 op=UNLOAD May 13 00:41:24.301000 audit: BPF prog-id=31 op=LOAD May 13 00:41:24.302000 audit: BPF prog-id=32 op=LOAD May 13 00:41:24.302000 audit: BPF prog-id=19 op=UNLOAD May 13 00:41:24.302000 audit: BPF prog-id=20 op=UNLOAD May 13 00:41:24.302000 audit: BPF prog-id=33 op=LOAD May 13 00:41:24.302000 audit: BPF prog-id=24 op=UNLOAD May 13 00:41:24.302000 audit: BPF prog-id=34 op=LOAD May 13 00:41:24.302000 audit: BPF prog-id=35 op=LOAD May 13 00:41:24.302000 audit: BPF prog-id=25 op=UNLOAD May 13 00:41:24.302000 audit: BPF prog-id=26 op=UNLOAD May 13 00:41:24.305517 systemd[1]: Finished ldconfig.service. May 13 00:41:24.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.307309 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:41:24.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.310778 systemd[1]: Starting audit-rules.service... May 13 00:41:24.312509 systemd[1]: Starting clean-ca-certificates.service... May 13 00:41:24.314192 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:41:24.315000 audit: BPF prog-id=36 op=LOAD May 13 00:41:24.317000 audit: BPF prog-id=37 op=LOAD May 13 00:41:24.316567 systemd[1]: Starting systemd-resolved.service... May 13 00:41:24.318469 systemd[1]: Starting systemd-timesyncd.service... May 13 00:41:24.320089 systemd[1]: Starting systemd-update-utmp.service... May 13 00:41:24.321848 systemd[1]: Finished clean-ca-certificates.service. May 13 00:41:24.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 00:41:24.323000 audit[1150]: SYSTEM_BOOT pid=1150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:41:24.324529 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:24.327472 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:24.327687 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:24.329215 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:24.331005 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:24.332827 systemd[1]: Starting modprobe@loop.service... May 13 00:41:24.333606 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:24.333754 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:24.334095 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:24.334219 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:24.335538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:24.335645 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:24.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.337002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:24.337100 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:24.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.338451 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:24.338543 systemd[1]: Finished modprobe@loop.service. May 13 00:41:24.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:24.339791 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:24.340000 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:24.341038 systemd[1]: Finished systemd-update-utmp.service. May 13 00:41:24.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.342313 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:41:24.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:24.344698 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:24.344889 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:24.345000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:41:24.345000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffda3578660 a2=420 a3=0 items=0 ppid=1139 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:24.345000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:41:24.346253 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:24.349155 augenrules[1162]: No rules May 13 00:41:24.348089 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:24.352092 systemd[1]: Starting modprobe@loop.service... May 13 00:41:24.352968 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:24.353089 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:24.354160 systemd[1]: Starting systemd-update-done.service... May 13 00:41:24.354982 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:24.355071 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:24.356006 systemd[1]: Finished audit-rules.service. May 13 00:41:24.357119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:24.357238 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:24.358353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:24.358454 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:24.359572 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:24.359656 systemd[1]: Finished modprobe@loop.service. May 13 00:41:24.360697 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 13 00:41:24.360776 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:24.363131 systemd[1]: Finished systemd-update-done.service. May 13 00:41:24.364341 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:24.364527 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:24.365731 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:24.367550 systemd[1]: Starting modprobe@drm.service... May 13 00:41:24.369201 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:24.371127 systemd[1]: Starting modprobe@loop.service... May 13 00:41:24.372055 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:24.372185 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:24.373381 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:41:24.374375 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:24.374464 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:24.375383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:24.375487 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:24.376677 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:41:24.376775 systemd[1]: Finished modprobe@drm.service. May 13 00:41:24.377908 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:24.378022 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:24.379367 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:24.379507 systemd[1]: Finished modprobe@loop.service. May 13 00:41:24.382899 systemd[1]: Finished ensure-sysext.service. May 13 00:41:24.383858 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:24.383895 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:24.385781 systemd[1]: Started systemd-timesyncd.service. May 13 00:41:24.386712 systemd[1]: Reached target time-set.target. May 13 00:41:24.390951 systemd-timesyncd[1146]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:41:24.391067 systemd-timesyncd[1146]: Initial clock synchronization to Tue 2025-05-13 00:41:24.593489 UTC. May 13 00:41:24.391292 systemd-resolved[1145]: Positive Trust Anchors: May 13 00:41:24.391306 systemd-resolved[1145]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:41:24.391333 systemd-resolved[1145]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:41:24.399567 systemd-resolved[1145]: Defaulting to hostname 'linux'. 
May 13 00:41:24.400903 systemd[1]: Started systemd-resolved.service. May 13 00:41:24.401840 systemd[1]: Reached target network.target. May 13 00:41:24.402614 systemd[1]: Reached target nss-lookup.target. May 13 00:41:24.403428 systemd[1]: Reached target sysinit.target. May 13 00:41:24.404270 systemd[1]: Started motdgen.path. May 13 00:41:24.404972 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:41:24.406200 systemd[1]: Started logrotate.timer. May 13 00:41:24.407006 systemd[1]: Started mdadm.timer. May 13 00:41:24.407690 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:41:24.408535 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:41:24.408560 systemd[1]: Reached target paths.target. May 13 00:41:24.409298 systemd[1]: Reached target timers.target. May 13 00:41:24.410322 systemd[1]: Listening on dbus.socket. May 13 00:41:24.412006 systemd[1]: Starting docker.socket... May 13 00:41:24.414756 systemd[1]: Listening on sshd.socket. May 13 00:41:24.415600 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:24.415946 systemd[1]: Listening on docker.socket. May 13 00:41:24.416765 systemd[1]: Reached target sockets.target. May 13 00:41:24.417575 systemd[1]: Reached target basic.target. May 13 00:41:24.418370 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:41:24.418395 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:41:24.419216 systemd[1]: Starting containerd.service... May 13 00:41:24.420785 systemd[1]: Starting dbus.service... May 13 00:41:24.422341 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:41:24.424098 systemd[1]: Starting extend-filesystems.service... May 13 00:41:24.425031 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:41:24.425898 systemd[1]: Starting motdgen.service... May 13 00:41:24.427528 systemd[1]: Starting prepare-helm.service... May 13 00:41:24.429091 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:41:24.431551 jq[1181]: false May 13 00:41:24.433011 systemd[1]: Starting sshd-keygen.service... May 13 00:41:24.436026 systemd[1]: Starting systemd-logind.service... May 13 00:41:24.436874 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:24.436930 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:41:24.437270 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:41:24.437792 systemd[1]: Starting update-engine.service... May 13 00:41:24.439309 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:41:24.441416 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:41:24.441553 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:41:24.442368 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 13 00:41:24.442510 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:41:24.445611 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:41:24.445750 systemd[1]: Finished motdgen.service. May 13 00:41:24.451640 jq[1197]: true May 13 00:41:24.452318 tar[1199]: linux-amd64/LICENSE May 13 00:41:24.452495 tar[1199]: linux-amd64/helm May 13 00:41:24.452726 extend-filesystems[1182]: Found loop1 May 13 00:41:24.453725 extend-filesystems[1182]: Found sr0 May 13 00:41:24.453725 extend-filesystems[1182]: Found vda May 13 00:41:24.453725 extend-filesystems[1182]: Found vda1 May 13 00:41:24.457500 extend-filesystems[1182]: Found vda2 May 13 00:41:24.457500 extend-filesystems[1182]: Found vda3 May 13 00:41:24.457500 extend-filesystems[1182]: Found usr May 13 00:41:24.457500 extend-filesystems[1182]: Found vda4 May 13 00:41:24.457500 extend-filesystems[1182]: Found vda6 May 13 00:41:24.457500 extend-filesystems[1182]: Found vda7 May 13 00:41:24.457500 extend-filesystems[1182]: Found vda9 May 13 00:41:24.457500 extend-filesystems[1182]: Checking size of /dev/vda9 May 13 00:41:24.479126 jq[1203]: true May 13 00:41:24.462686 systemd[1]: Started dbus.service. May 13 00:41:24.462517 dbus-daemon[1180]: [system] SELinux support is enabled May 13 00:41:24.477733 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:41:24.477762 systemd[1]: Reached target system-config.target. May 13 00:41:24.478714 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:41:24.478726 systemd[1]: Reached target user-config.target. May 13 00:41:24.498490 extend-filesystems[1182]: Resized partition /dev/vda9 May 13 00:41:24.505068 extend-filesystems[1231]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:41:24.514916 update_engine[1196]: I0513 00:41:24.514436 1196 main.cc:92] Flatcar Update Engine starting May 13 00:41:24.516959 update_engine[1196]: I0513 00:41:24.516749 1196 update_check_scheduler.cc:74] Next update check in 3m45s May 13 00:41:24.519595 systemd[1]: Started update-engine.service. May 13 00:41:24.522553 systemd-logind[1195]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:41:24.522927 systemd-logind[1195]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:41:24.523279 systemd[1]: Started locksmithd.service. May 13 00:41:24.524463 systemd-logind[1195]: New seat seat0. May 13 00:41:24.525880 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:41:24.541678 systemd[1]: Started systemd-logind.service. May 13 00:41:24.568337 env[1200]: time="2025-05-13T00:41:24.568207509Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:41:24.576844 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:41:24.604126 env[1200]: time="2025-05-13T00:41:24.604073814Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 13 00:41:24.625377 locksmithd[1232]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:41:24.704966 extend-filesystems[1231]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:41:24.704966 extend-filesystems[1231]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:41:24.704966 extend-filesystems[1231]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.704844872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.707660694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.707683506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.707889242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.707903309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.707913818Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.707922194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.707980734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.708157866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:24.712907 env[1200]: time="2025-05-13T00:41:24.708250801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:24.713103 bash[1228]: Updated "/home/core/.ssh/authorized_keys" May 13 00:41:24.713178 extend-filesystems[1182]: Resized filesystem in /dev/vda9 May 13 00:41:24.705550 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:41:24.714159 env[1200]: time="2025-05-13T00:41:24.708262082Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 13 00:41:24.714159 env[1200]: time="2025-05-13T00:41:24.708297909Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:41:24.714159 env[1200]: time="2025-05-13T00:41:24.708307938Z" level=info msg="metadata content store policy set" policy=shared May 13 00:41:24.705686 systemd[1]: Finished extend-filesystems.service. May 13 00:41:24.711033 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:41:24.717436 env[1200]: time="2025-05-13T00:41:24.717398719Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:41:24.717482 env[1200]: time="2025-05-13T00:41:24.717438474Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:41:24.717482 env[1200]: time="2025-05-13T00:41:24.717451879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:41:24.717526 env[1200]: time="2025-05-13T00:41:24.717480843Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:41:24.717526 env[1200]: time="2025-05-13T00:41:24.717499699Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:41:24.717526 env[1200]: time="2025-05-13T00:41:24.717512202Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:41:24.717526 env[1200]: time="2025-05-13T00:41:24.717525227Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:41:24.717601 env[1200]: time="2025-05-13T00:41:24.717538572Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:41:24.717601 env[1200]: time="2025-05-13T00:41:24.717552177Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:41:24.717601 env[1200]: time="2025-05-13T00:41:24.717564220Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:41:24.717601 env[1200]: time="2025-05-13T00:41:24.717575551Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:41:24.717601 env[1200]: time="2025-05-13T00:41:24.717587233Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:41:24.717715 env[1200]: time="2025-05-13T00:41:24.717691128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:41:24.717783 env[1200]: time="2025-05-13T00:41:24.717761320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:41:24.718042 env[1200]: time="2025-05-13T00:41:24.718018141Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:41:24.718074 env[1200]: time="2025-05-13T00:41:24.718046214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718074 env[1200]: time="2025-05-13T00:41:24.718058697Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 May 13 00:41:24.718114 env[1200]: time="2025-05-13T00:41:24.718098011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718114 env[1200]: time="2025-05-13T00:41:24.718110334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718154 env[1200]: time="2025-05-13T00:41:24.718121796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718154 env[1200]: time="2025-05-13T00:41:24.718132356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718154 env[1200]: time="2025-05-13T00:41:24.718142865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718154 env[1200]: time="2025-05-13T00:41:24.718154196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718229 env[1200]: time="2025-05-13T00:41:24.718166329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718229 env[1200]: time="2025-05-13T00:41:24.718176719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718229 env[1200]: time="2025-05-13T00:41:24.718188761Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:41:24.718295 env[1200]: time="2025-05-13T00:41:24.718284401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718317 env[1200]: time="2025-05-13T00:41:24.718298547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718317 env[1200]: time="2025-05-13T00:41:24.718310159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:41:24.718354 env[1200]: time="2025-05-13T00:41:24.718323915Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:41:24.718354 env[1200]: time="2025-05-13T00:41:24.718338753Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:41:24.718354 env[1200]: time="2025-05-13T00:41:24.718348631Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:41:24.718415 env[1200]: time="2025-05-13T00:41:24.718366264Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:41:24.718415 env[1200]: time="2025-05-13T00:41:24.718398254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:41:24.720235 env[1200]: time="2025-05-13T00:41:24.718725809Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:41:24.721222 env[1200]: time="2025-05-13T00:41:24.720576700Z" level=info msg="Connect containerd service" May 13 00:41:24.721222 env[1200]: time="2025-05-13T00:41:24.720922118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:41:24.721841 env[1200]: time="2025-05-13T00:41:24.721797210Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:41:24.722063 env[1200]: time="2025-05-13T00:41:24.722044033Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:41:24.722107 env[1200]: time="2025-05-13T00:41:24.722077926Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:41:24.722174 systemd[1]: Started containerd.service. 
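The "Start cri plugin with config {...}" dump above is containerd printing its effective CRI settings. As a hedged sketch (not the actual file from this host), the load-bearing values — the overlayfs snapshotter, runc via io.containerd.runc.v2 with the systemd cgroup driver, the pause:3.6 sandbox image, and the CNI directories — would be expressed in a containerd 1.6 /etc/containerd/config.toml roughly as:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error alongside the dump is consistent with conf_dir being empty at this point; it clears once something installs a network config under /etc/cni/net.d.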
May 13 00:41:24.722640 env[1200]: time="2025-05-13T00:41:24.722616847Z" level=info msg="containerd successfully booted in 0.162932s" May 13 00:41:24.725464 env[1200]: time="2025-05-13T00:41:24.725417951Z" level=info msg="Start subscribing containerd event" May 13 00:41:24.725515 env[1200]: time="2025-05-13T00:41:24.725486971Z" level=info msg="Start recovering state" May 13 00:41:24.725559 env[1200]: time="2025-05-13T00:41:24.725534219Z" level=info msg="Start event monitor" May 13 00:41:24.725559 env[1200]: time="2025-05-13T00:41:24.725557874Z" level=info msg="Start snapshots syncer" May 13 00:41:24.725624 env[1200]: time="2025-05-13T00:41:24.725565338Z" level=info msg="Start cni network conf syncer for default" May 13 00:41:24.725624 env[1200]: time="2025-05-13T00:41:24.725572401Z" level=info msg="Start streaming server" May 13 00:41:25.117552 tar[1199]: linux-amd64/README.md May 13 00:41:25.121265 systemd[1]: Finished prepare-helm.service. May 13 00:41:25.297787 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:41:25.316465 systemd[1]: Finished sshd-keygen.service. May 13 00:41:25.318869 systemd[1]: Starting issuegen.service... May 13 00:41:25.323566 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:41:25.323689 systemd[1]: Finished issuegen.service. May 13 00:41:25.325765 systemd[1]: Starting systemd-user-sessions.service... May 13 00:41:25.331554 systemd[1]: Finished systemd-user-sessions.service. May 13 00:41:25.333668 systemd[1]: Started getty@tty1.service. May 13 00:41:25.335563 systemd[1]: Started serial-getty@ttyS0.service. May 13 00:41:25.336706 systemd[1]: Reached target getty.target. May 13 00:41:25.452957 systemd-networkd[1025]: eth0: Gained IPv6LL May 13 00:41:25.454854 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:41:25.456248 systemd[1]: Reached target network-online.target. May 13 00:41:25.458504 systemd[1]: Starting kubelet.service... May 13 00:41:26.313486 systemd[1]: Started kubelet.service. May 13 00:41:26.314876 systemd[1]: Reached target multi-user.target. May 13 00:41:26.316972 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:41:26.323342 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:41:26.323540 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:41:26.324879 systemd[1]: Startup finished in 665ms (kernel) + 5.556s (initrd) + 6.044s (userspace) = 12.267s. May 13 00:41:26.821993 kubelet[1262]: E0513 00:41:26.821937 1262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:26.823451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:26.823562 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:41:26.823785 systemd[1]: kubelet.service: Consumed 1.215s CPU time. May 13 00:41:27.086595 systemd[1]: Created slice system-sshd.slice. May 13 00:41:27.087648 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:54330.service. 
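A small aside on the startup summary above: the displayed phase durations are rounded, so they do not quite sum to the printed total — presumably systemd totals the unrounded timestamps. Checking the arithmetic:

    # Phase durations as printed: 665 ms kernel + 5.556 s initrd + 6.044 s userspace.
    print(round(0.665 + 5.556 + 6.044, 3))  # 12.265, vs. the 12.267 s systemd reports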
May 13 00:41:27.118173 sshd[1271]: Accepted publickey for core from 10.0.0.1 port 54330 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:27.119237 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:27.127904 systemd-logind[1195]: New session 1 of user core. May 13 00:41:27.128649 systemd[1]: Created slice user-500.slice. May 13 00:41:27.129533 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:41:27.137844 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:41:27.139408 systemd[1]: Starting user@500.service... May 13 00:41:27.141745 (systemd)[1274]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:27.210526 systemd[1274]: Queued start job for default target default.target. May 13 00:41:27.210938 systemd[1274]: Reached target paths.target. May 13 00:41:27.210955 systemd[1274]: Reached target sockets.target. May 13 00:41:27.210967 systemd[1274]: Reached target timers.target. May 13 00:41:27.210977 systemd[1274]: Reached target basic.target. May 13 00:41:27.211011 systemd[1274]: Reached target default.target. May 13 00:41:27.211032 systemd[1274]: Startup finished in 61ms. May 13 00:41:27.211174 systemd[1]: Started user@500.service. May 13 00:41:27.212467 systemd[1]: Started session-1.scope. May 13 00:41:27.264044 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:54340.service. May 13 00:41:27.292276 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 54340 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:27.293554 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:27.296864 systemd-logind[1195]: New session 2 of user core. May 13 00:41:27.297915 systemd[1]: Started session-2.scope. May 13 00:41:27.350722 sshd[1283]: pam_unix(sshd:session): session closed for user core May 13 00:41:27.353617 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:54340.service: Deactivated successfully. May 13 00:41:27.354142 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:41:27.354590 systemd-logind[1195]: Session 2 logged out. Waiting for processes to exit. May 13 00:41:27.355612 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:54342.service. May 13 00:41:27.356209 systemd-logind[1195]: Removed session 2. May 13 00:41:27.383118 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 54342 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:27.383898 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:27.386605 systemd-logind[1195]: New session 3 of user core. May 13 00:41:27.387295 systemd[1]: Started session-3.scope. May 13 00:41:27.437038 sshd[1289]: pam_unix(sshd:session): session closed for user core May 13 00:41:27.439610 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:54342.service: Deactivated successfully. May 13 00:41:27.440486 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:41:27.441044 systemd-logind[1195]: Session 3 logged out. Waiting for processes to exit. May 13 00:41:27.442144 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:54348.service. May 13 00:41:27.442858 systemd-logind[1195]: Removed session 3. 
May 13 00:41:27.470650 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 54348 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:27.471721 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:27.474766 systemd-logind[1195]: New session 4 of user core. May 13 00:41:27.475401 systemd[1]: Started session-4.scope. May 13 00:41:27.528509 sshd[1295]: pam_unix(sshd:session): session closed for user core May 13 00:41:27.530903 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:54348.service: Deactivated successfully. May 13 00:41:27.531360 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:41:27.531818 systemd-logind[1195]: Session 4 logged out. Waiting for processes to exit. May 13 00:41:27.532690 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:54364.service. May 13 00:41:27.533441 systemd-logind[1195]: Removed session 4. May 13 00:41:27.561005 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 54364 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:41:27.562036 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:27.564975 systemd-logind[1195]: New session 5 of user core. May 13 00:41:27.565626 systemd[1]: Started session-5.scope. May 13 00:41:27.620620 sudo[1305]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:41:27.620866 sudo[1305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:41:27.642957 systemd[1]: Starting docker.service... May 13 00:41:27.676568 env[1317]: time="2025-05-13T00:41:27.676523457Z" level=info msg="Starting up" May 13 00:41:27.677653 env[1317]: time="2025-05-13T00:41:27.677619302Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:41:27.677653 env[1317]: time="2025-05-13T00:41:27.677645069Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:41:27.677715 env[1317]: time="2025-05-13T00:41:27.677668733Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:41:27.677715 env[1317]: time="2025-05-13T00:41:27.677679457Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:41:27.678936 env[1317]: time="2025-05-13T00:41:27.678908165Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:41:27.678936 env[1317]: time="2025-05-13T00:41:27.678926711Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:41:27.679007 env[1317]: time="2025-05-13T00:41:27.678942051Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:41:27.679007 env[1317]: time="2025-05-13T00:41:27.678951284Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:41:28.540415 env[1317]: time="2025-05-13T00:41:28.540373573Z" level=info msg="Loading containers: start." May 13 00:41:28.651864 kernel: Initializing XFRM netlink socket May 13 00:41:28.685141 env[1317]: time="2025-05-13T00:41:28.685093952Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 00:41:28.732631 systemd-networkd[1025]: docker0: Link UP May 13 00:41:28.748043 env[1317]: time="2025-05-13T00:41:28.748001509Z" level=info msg="Loading containers: done." 
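dockerd's note about docker0 above points at its own remedy: the default 172.17.0.0/16 bridge subnet can be moved either with the --bip flag or persistently in /etc/docker/daemon.json. An illustrative example (the subnet here is arbitrary; "bip" is the daemon.json equivalent of --bip):

    {
      "bip": "172.18.0.1/24"
    }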
May 13 00:41:28.758156 env[1317]: time="2025-05-13T00:41:28.758115217Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:41:28.758299 env[1317]: time="2025-05-13T00:41:28.758257932Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:41:28.758335 env[1317]: time="2025-05-13T00:41:28.758327001Z" level=info msg="Daemon has completed initialization" May 13 00:41:28.773746 systemd[1]: Started docker.service. May 13 00:41:28.777174 env[1317]: time="2025-05-13T00:41:28.777136503Z" level=info msg="API listen on /run/docker.sock" May 13 00:41:29.489638 env[1200]: time="2025-05-13T00:41:29.489586727Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 00:41:30.049892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2258360156.mount: Deactivated successfully. May 13 00:41:31.416014 env[1200]: time="2025-05-13T00:41:31.415961581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:31.418911 env[1200]: time="2025-05-13T00:41:31.418884285Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:31.421232 env[1200]: time="2025-05-13T00:41:31.421194721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:31.423145 env[1200]: time="2025-05-13T00:41:31.423120763Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:31.423675 env[1200]: time="2025-05-13T00:41:31.423645688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 00:41:31.424180 env[1200]: time="2025-05-13T00:41:31.424158463Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 00:41:33.070983 env[1200]: time="2025-05-13T00:41:33.070913513Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:33.073217 env[1200]: time="2025-05-13T00:41:33.073188433Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:33.075489 env[1200]: time="2025-05-13T00:41:33.075465062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:33.077341 env[1200]: time="2025-05-13T00:41:33.077286082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 13 00:41:33.078254 env[1200]: time="2025-05-13T00:41:33.078212155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 00:41:33.078762 env[1200]: time="2025-05-13T00:41:33.078736180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 00:41:35.021532 env[1200]: time="2025-05-13T00:41:35.021462612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:35.023455 env[1200]: time="2025-05-13T00:41:35.023409750Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:35.025083 env[1200]: time="2025-05-13T00:41:35.025048350Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:35.026528 env[1200]: time="2025-05-13T00:41:35.026494135Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:35.027186 env[1200]: time="2025-05-13T00:41:35.027152733Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 00:41:35.027617 env[1200]: time="2025-05-13T00:41:35.027592664Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 00:41:36.247455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount199575953.mount: Deactivated successfully. May 13 00:41:37.074460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:41:37.074650 systemd[1]: Stopped kubelet.service. May 13 00:41:37.074690 systemd[1]: kubelet.service: Consumed 1.215s CPU time. May 13 00:41:37.076039 systemd[1]: Starting kubelet.service... May 13 00:41:37.156062 systemd[1]: Started kubelet.service. May 13 00:41:37.481446 kubelet[1452]: E0513 00:41:37.481297 1452 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:37.485502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:37.485626 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
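These kubelet exits (here and at 00:41:26 above) are the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm init/join, so until that runs the unit simply crash-loops under its restart policy. A minimal sketch of what eventually lands in that path — assuming kubeadm-style defaults, and consistent with the cgroupDriver:systemd, static pod path, and containerd endpoint the kubelet logs once it does start further below — looks like:

    # /var/lib/kubelet/config.yaml -- hedged sketch, not the file from this host
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock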
May 13 00:41:37.873049 env[1200]: time="2025-05-13T00:41:37.872917076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.877332 env[1200]: time="2025-05-13T00:41:37.877276280Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.879879 env[1200]: time="2025-05-13T00:41:37.879843049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.881183 env[1200]: time="2025-05-13T00:41:37.881154735Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:37.881541 env[1200]: time="2025-05-13T00:41:37.881517916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 00:41:37.882162 env[1200]: time="2025-05-13T00:41:37.881999238Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 00:41:38.413679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527518112.mount: Deactivated successfully. May 13 00:41:39.388206 env[1200]: time="2025-05-13T00:41:39.388144053Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.390001 env[1200]: time="2025-05-13T00:41:39.389957598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.391831 env[1200]: time="2025-05-13T00:41:39.391769534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.393525 env[1200]: time="2025-05-13T00:41:39.393493141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.394291 env[1200]: time="2025-05-13T00:41:39.394255974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 00:41:39.394799 env[1200]: time="2025-05-13T00:41:39.394772057Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 00:41:39.855380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360520580.mount: Deactivated successfully. 
May 13 00:41:39.861885 env[1200]: time="2025-05-13T00:41:39.861847863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.863563 env[1200]: time="2025-05-13T00:41:39.863521774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.864854 env[1200]: time="2025-05-13T00:41:39.864819699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.865948 env[1200]: time="2025-05-13T00:41:39.865929400Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.866559 env[1200]: time="2025-05-13T00:41:39.866525283Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 00:41:39.866976 env[1200]: time="2025-05-13T00:41:39.866956536Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 00:41:40.718191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065753969.mount: Deactivated successfully. May 13 00:41:45.503166 env[1200]: time="2025-05-13T00:41:45.503090836Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:45.505264 env[1200]: time="2025-05-13T00:41:45.505233306Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:45.507276 env[1200]: time="2025-05-13T00:41:45.507247643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:45.509167 env[1200]: time="2025-05-13T00:41:45.509131769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:45.509960 env[1200]: time="2025-05-13T00:41:45.509925494Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 00:41:47.568701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:41:47.568952 systemd[1]: Stopped kubelet.service. May 13 00:41:47.570204 systemd[1]: Starting kubelet.service... May 13 00:41:47.660924 systemd[1]: Started kubelet.service. 
May 13 00:41:47.695869 kubelet[1486]: E0513 00:41:47.695795 1486 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:47.697790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:47.697955 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:41:47.907565 systemd[1]: Stopped kubelet.service. May 13 00:41:47.909593 systemd[1]: Starting kubelet.service... May 13 00:41:47.931268 systemd[1]: Reloading. May 13 00:41:48.003071 /usr/lib/systemd/system-generators/torcx-generator[1521]: time="2025-05-13T00:41:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:48.003456 /usr/lib/systemd/system-generators/torcx-generator[1521]: time="2025-05-13T00:41:48Z" level=info msg="torcx already run" May 13 00:41:49.883203 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:49.883219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:49.901632 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:49.976676 systemd[1]: Started kubelet.service. May 13 00:41:49.977741 systemd[1]: Stopping kubelet.service... May 13 00:41:49.977979 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:41:49.978119 systemd[1]: Stopped kubelet.service. May 13 00:41:49.979302 systemd[1]: Starting kubelet.service... May 13 00:41:50.055940 systemd[1]: Started kubelet.service. May 13 00:41:50.116539 kubelet[1569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:41:50.116539 kubelet[1569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 00:41:50.116539 kubelet[1569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 00:41:50.116918 kubelet[1569]: I0513 00:41:50.116592 1569 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:41:50.300942 kubelet[1569]: I0513 00:41:50.300827 1569 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:41:50.300942 kubelet[1569]: I0513 00:41:50.300854 1569 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:41:50.301113 kubelet[1569]: I0513 00:41:50.301099 1569 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:41:50.335254 kubelet[1569]: E0513 00:41:50.335206 1569 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:50.336237 kubelet[1569]: I0513 00:41:50.336213 1569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:41:50.343220 kubelet[1569]: E0513 00:41:50.343186 1569 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:41:50.343220 kubelet[1569]: I0513 00:41:50.343211 1569 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:41:50.346361 kubelet[1569]: I0513 00:41:50.346338 1569 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:41:50.346528 kubelet[1569]: I0513 00:41:50.346496 1569 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:41:50.346668 kubelet[1569]: I0513 00:41:50.346520 1569 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:41:50.346668 kubelet[1569]: I0513 00:41:50.346666 1569 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:41:50.346820 kubelet[1569]: I0513 00:41:50.346675 1569 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:41:50.346820 kubelet[1569]: I0513 00:41:50.346768 1569 state_mem.go:36] "Initialized new in-memory state store" May 13 00:41:50.349679 kubelet[1569]: I0513 00:41:50.349658 1569 kubelet.go:446] "Attempting to sync node with API server" May 13 00:41:50.349722 kubelet[1569]: I0513 00:41:50.349688 1569 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:41:50.349722 kubelet[1569]: I0513 00:41:50.349705 1569 kubelet.go:352] "Adding apiserver pod source" May 13 00:41:50.349722 kubelet[1569]: I0513 00:41:50.349714 1569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:41:50.357047 kubelet[1569]: W0513 00:41:50.356992 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused May 13 00:41:50.357047 kubelet[1569]: E0513 00:41:50.357060 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:50.357442 kubelet[1569]: W0513 00:41:50.357394 1569 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused May 13 00:41:50.357442 kubelet[1569]: E0513 00:41:50.357432 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:50.360703 kubelet[1569]: I0513 00:41:50.360680 1569 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:41:50.361155 kubelet[1569]: I0513 00:41:50.361139 1569 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:41:50.363085 kubelet[1569]: W0513 00:41:50.363040 1569 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:41:50.371472 kubelet[1569]: I0513 00:41:50.371450 1569 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:41:50.371546 kubelet[1569]: I0513 00:41:50.371485 1569 server.go:1287] "Started kubelet" May 13 00:41:50.374076 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 13 00:41:50.374209 kubelet[1569]: I0513 00:41:50.374169 1569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:41:50.375182 kubelet[1569]: I0513 00:41:50.374589 1569 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:41:50.375616 kubelet[1569]: I0513 00:41:50.375590 1569 server.go:490] "Adding debug handlers to kubelet server" May 13 00:41:50.376371 kubelet[1569]: I0513 00:41:50.376322 1569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:41:50.376515 kubelet[1569]: I0513 00:41:50.376500 1569 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:41:50.376686 kubelet[1569]: I0513 00:41:50.376671 1569 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:41:50.378277 kubelet[1569]: I0513 00:41:50.378250 1569 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:41:50.378670 kubelet[1569]: E0513 00:41:50.378652 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:50.379054 kubelet[1569]: E0513 00:41:50.378964 1569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="200ms" May 13 00:41:50.380734 kubelet[1569]: I0513 00:41:50.380713 1569 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:41:50.380785 kubelet[1569]: I0513 00:41:50.380753 1569 reconciler.go:26] "Reconciler: start to sync state" May 13 00:41:50.381277 kubelet[1569]: I0513 00:41:50.381245 1569 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory May 13 00:41:50.382216 kubelet[1569]: W0513 00:41:50.382156 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused May 13 00:41:50.382301 kubelet[1569]: E0513 00:41:50.382215 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" May 13 00:41:50.382369 kubelet[1569]: I0513 00:41:50.382349 1569 factory.go:221] Registration of the containerd container factory successfully May 13 00:41:50.382369 kubelet[1569]: I0513 00:41:50.382363 1569 factory.go:221] Registration of the systemd container factory successfully May 13 00:41:50.384940 kubelet[1569]: E0513 00:41:50.381920 1569 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eef622adfa4aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:41:50.371464362 +0000 UTC m=+0.312165194,LastTimestamp:2025-05-13 00:41:50.371464362 +0000 UTC m=+0.312165194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:41:50.387264 kubelet[1569]: I0513 00:41:50.387214 1569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:41:50.387993 kubelet[1569]: I0513 00:41:50.387966 1569 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:41:50.387993 kubelet[1569]: I0513 00:41:50.387988 1569 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:41:50.388089 kubelet[1569]: I0513 00:41:50.388017 1569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
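The HardEvictionThresholds embedded in the NodeConfig dump above translate one-for-one into KubeletConfiguration's evictionHard map. Restating them in config form (same values as the dump; the Percentage fields 0.1, 0.05, and 0.15 are fractions, i.e. percentages):

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"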
May 13 00:41:50.388089 kubelet[1569]: I0513 00:41:50.388028 1569 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 00:41:50.388151 kubelet[1569]: E0513 00:41:50.388079 1569 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:41:50.394145 kubelet[1569]: W0513 00:41:50.394119 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 13 00:41:50.394220 kubelet[1569]: E0513 00:41:50.394162 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError"
May 13 00:41:50.394485 kubelet[1569]: I0513 00:41:50.394441 1569 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 00:41:50.394485 kubelet[1569]: I0513 00:41:50.394473 1569 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 00:41:50.394485 kubelet[1569]: I0513 00:41:50.394489 1569 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:41:50.479742 kubelet[1569]: E0513 00:41:50.479716 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:50.488247 kubelet[1569]: E0513 00:41:50.488196 1569 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 00:41:50.579909 kubelet[1569]: E0513 00:41:50.579879 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:50.579995 kubelet[1569]: E0513 00:41:50.579969 1569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="400ms"
May 13 00:41:50.680672 kubelet[1569]: E0513 00:41:50.680617 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:50.688867 kubelet[1569]: E0513 00:41:50.688841 1569 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 00:41:50.781240 kubelet[1569]: E0513 00:41:50.781168 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:50.882384 kubelet[1569]: E0513 00:41:50.882241 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:50.981089 kubelet[1569]: E0513 00:41:50.981051 1569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="800ms"
May 13 00:41:50.983158 kubelet[1569]: E0513 00:41:50.983126 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:51.083825 kubelet[1569]: E0513 00:41:51.083785 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:51.089084 kubelet[1569]: E0513 00:41:51.089051 1569 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 00:41:51.204052 kubelet[1569]: E0513 00:41:51.184620 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:51.220245 kubelet[1569]: W0513 00:41:51.220216 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 13 00:41:51.220334 kubelet[1569]: E0513 00:41:51.220261 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError"
May 13 00:41:51.284995 kubelet[1569]: E0513 00:41:51.284959 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:51.321498 kubelet[1569]: W0513 00:41:51.321466 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 13 00:41:51.321556 kubelet[1569]: E0513 00:41:51.321494 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError"
May 13 00:41:51.382309 kubelet[1569]: W0513 00:41:51.382259 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 13 00:41:51.382309 kubelet[1569]: E0513 00:41:51.382307 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError"
May 13 00:41:51.385653 kubelet[1569]: E0513 00:41:51.385624 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:51.419256 kubelet[1569]: I0513 00:41:51.419204 1569 policy_none.go:49] "None policy: Start"
May 13 00:41:51.419256 kubelet[1569]: I0513 00:41:51.419239 1569 memory_manager.go:186] "Starting memorymanager" policy="None"
May 13 00:41:51.419256 kubelet[1569]: I0513 00:41:51.419259 1569 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:41:51.486127 kubelet[1569]: E0513 00:41:51.485996 1569 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:41:51.529181 systemd[1]: Created slice kubepods.slice.
May 13 00:41:51.533696 systemd[1]: Created slice kubepods-burstable.slice.
May 13 00:41:51.536233 systemd[1]: Created slice kubepods-besteffort.slice.
May 13 00:41:51.541526 kubelet[1569]: I0513 00:41:51.541494 1569 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:41:51.541713 kubelet[1569]: I0513 00:41:51.541672 1569 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 00:41:51.541713 kubelet[1569]: I0513 00:41:51.541690 1569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:41:51.542421 kubelet[1569]: I0513 00:41:51.542233 1569 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:41:51.543060 kubelet[1569]: E0513 00:41:51.543016 1569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 13 00:41:51.543229 kubelet[1569]: E0513 00:41:51.543083 1569 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 13 00:41:51.643322 kubelet[1569]: I0513 00:41:51.643294 1569 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:41:51.643733 kubelet[1569]: E0513 00:41:51.643685 1569 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost"
May 13 00:41:51.646361 kubelet[1569]: W0513 00:41:51.646304 1569 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused
May 13 00:41:51.646409 kubelet[1569]: E0513 00:41:51.646368 1569 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError"
May 13 00:41:51.782322 kubelet[1569]: E0513 00:41:51.782188 1569 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="1.6s"
May 13 00:41:51.845901 kubelet[1569]: I0513 00:41:51.845865 1569 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:41:51.846326 kubelet[1569]: E0513 00:41:51.846286 1569 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost"
May 13 00:41:51.896438 systemd[1]: Created slice kubepods-burstable-poda099774ee6146077da31cffc520876d2.slice.
May 13 00:41:51.904319 kubelet[1569]: E0513 00:41:51.904291 1569 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:41:51.905647 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice.
May 13 00:41:51.910573 kubelet[1569]: E0513 00:41:51.910513 1569 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:41:51.912525 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice.
May 13 00:41:51.913716 kubelet[1569]: E0513 00:41:51.913696 1569 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:41:51.990225 kubelet[1569]: I0513 00:41:51.990172 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a099774ee6146077da31cffc520876d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a099774ee6146077da31cffc520876d2\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:41:51.990225 kubelet[1569]: I0513 00:41:51.990219 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a099774ee6146077da31cffc520876d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a099774ee6146077da31cffc520876d2\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:41:51.990225 kubelet[1569]: I0513 00:41:51.990241 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:51.990533 kubelet[1569]: I0513 00:41:51.990257 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:41:51.990533 kubelet[1569]: I0513 00:41:51.990273 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a099774ee6146077da31cffc520876d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a099774ee6146077da31cffc520876d2\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:41:51.990533 kubelet[1569]: I0513 00:41:51.990286 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:51.990533 kubelet[1569]: I0513 00:41:51.990299 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:51.990533 kubelet[1569]: I0513 00:41:51.990317 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:51.990706 kubelet[1569]: I0513 00:41:51.990332 1569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:52.204890 kubelet[1569]: E0513 00:41:52.204799 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:52.205591 env[1200]: time="2025-05-13T00:41:52.205551918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a099774ee6146077da31cffc520876d2,Namespace:kube-system,Attempt:0,}"
May 13 00:41:52.211779 kubelet[1569]: E0513 00:41:52.211731 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:52.212161 env[1200]: time="2025-05-13T00:41:52.212112229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 13 00:41:52.214372 kubelet[1569]: E0513 00:41:52.214337 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:52.214704 env[1200]: time="2025-05-13T00:41:52.214678134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 13 00:41:52.247836 kubelet[1569]: I0513 00:41:52.247819 1569 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:41:52.248209 kubelet[1569]: E0513 00:41:52.248162 1569 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost"
May 13 00:41:52.399165 kubelet[1569]: E0513 00:41:52.399119 1569 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError"
May 13 00:41:52.905897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825482716.mount: Deactivated successfully.
May 13 00:41:52.912545 env[1200]: time="2025-05-13T00:41:52.912492710Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.915072 env[1200]: time="2025-05-13T00:41:52.915033811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.915910 env[1200]: time="2025-05-13T00:41:52.915880260Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.916777 env[1200]: time="2025-05-13T00:41:52.916742208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.919301 env[1200]: time="2025-05-13T00:41:52.919270938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.920409 env[1200]: time="2025-05-13T00:41:52.920372159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.922138 env[1200]: time="2025-05-13T00:41:52.922095225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.923344 env[1200]: time="2025-05-13T00:41:52.923318760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.924430 env[1200]: time="2025-05-13T00:41:52.924405585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.924989 env[1200]: time="2025-05-13T00:41:52.924962421Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.926088 env[1200]: time="2025-05-13T00:41:52.926060315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.928199 env[1200]: time="2025-05-13T00:41:52.928165519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:41:52.946550 env[1200]: time="2025-05-13T00:41:52.946481382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:41:52.946550 env[1200]: time="2025-05-13T00:41:52.946522808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:41:52.946550 env[1200]: time="2025-05-13T00:41:52.946532323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:41:52.946707 env[1200]: time="2025-05-13T00:41:52.946666968Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b238156c935c5adf74c28fdd914d92b5168e145439353042450b73844459ddc pid=1610 runtime=io.containerd.runc.v2
May 13 00:41:52.957433 systemd[1]: Started cri-containerd-7b238156c935c5adf74c28fdd914d92b5168e145439353042450b73844459ddc.scope.
May 13 00:41:52.960193 env[1200]: time="2025-05-13T00:41:52.960119237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:41:52.960382 env[1200]: time="2025-05-13T00:41:52.960357548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:41:52.960513 env[1200]: time="2025-05-13T00:41:52.960490459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:41:52.962109 env[1200]: time="2025-05-13T00:41:52.962068542Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3993f40530330482f8931e34cd3eb1f0d2048b771ae0ce96d540af06bb7f019 pid=1638 runtime=io.containerd.runc.v2
May 13 00:41:52.963473 env[1200]: time="2025-05-13T00:41:52.963405328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:41:52.963473 env[1200]: time="2025-05-13T00:41:52.963453561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:41:52.963473 env[1200]: time="2025-05-13T00:41:52.963463176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:41:52.963622 env[1200]: time="2025-05-13T00:41:52.963582953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51c139e10dddc6be34ba3372a3fbf208996ceef7b5ba332ef524ceab88ba5bb1 pid=1652 runtime=io.containerd.runc.v2
May 13 00:41:52.972221 systemd[1]: Started cri-containerd-b3993f40530330482f8931e34cd3eb1f0d2048b771ae0ce96d540af06bb7f019.scope.
May 13 00:41:52.977875 systemd[1]: Started cri-containerd-51c139e10dddc6be34ba3372a3fbf208996ceef7b5ba332ef524ceab88ba5bb1.scope.
May 13 00:41:53.000984 env[1200]: time="2025-05-13T00:41:52.999854435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b238156c935c5adf74c28fdd914d92b5168e145439353042450b73844459ddc\""
May 13 00:41:53.001997 kubelet[1569]: E0513 00:41:53.001973 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:53.006296 env[1200]: time="2025-05-13T00:41:53.006253043Z" level=info msg="CreateContainer within sandbox \"7b238156c935c5adf74c28fdd914d92b5168e145439353042450b73844459ddc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 00:41:53.015686 env[1200]: time="2025-05-13T00:41:53.015650989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3993f40530330482f8931e34cd3eb1f0d2048b771ae0ce96d540af06bb7f019\""
May 13 00:41:53.016618 kubelet[1569]: E0513 00:41:53.016487 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:53.017795 env[1200]: time="2025-05-13T00:41:53.017775220Z" level=info msg="CreateContainer within sandbox \"b3993f40530330482f8931e34cd3eb1f0d2048b771ae0ce96d540af06bb7f019\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 00:41:53.021318 env[1200]: time="2025-05-13T00:41:53.021270135Z" level=info msg="CreateContainer within sandbox \"7b238156c935c5adf74c28fdd914d92b5168e145439353042450b73844459ddc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06cceb5599d6c526cfcef104858a2306ab3fb1eaa38dca037092ac25ee79909e\""
May 13 00:41:53.021921 env[1200]: time="2025-05-13T00:41:53.021844658Z" level=info msg="StartContainer for \"06cceb5599d6c526cfcef104858a2306ab3fb1eaa38dca037092ac25ee79909e\""
May 13 00:41:53.026837 env[1200]: time="2025-05-13T00:41:53.026797203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a099774ee6146077da31cffc520876d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"51c139e10dddc6be34ba3372a3fbf208996ceef7b5ba332ef524ceab88ba5bb1\""
May 13 00:41:53.027377 kubelet[1569]: E0513 00:41:53.027350 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:53.029138 env[1200]: time="2025-05-13T00:41:53.029097069Z" level=info msg="CreateContainer within sandbox \"51c139e10dddc6be34ba3372a3fbf208996ceef7b5ba332ef524ceab88ba5bb1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 00:41:53.036147 env[1200]: time="2025-05-13T00:41:53.036116023Z" level=info msg="CreateContainer within sandbox \"b3993f40530330482f8931e34cd3eb1f0d2048b771ae0ce96d540af06bb7f019\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f409c23c3b546d65968f39f5d572e927b9ac7a2c5b62354cdfbd8af97b38617b\""
May 13 00:41:53.036571 env[1200]: time="2025-05-13T00:41:53.036542038Z" level=info msg="StartContainer for \"f409c23c3b546d65968f39f5d572e927b9ac7a2c5b62354cdfbd8af97b38617b\""
May 13 00:41:53.038811 systemd[1]: Started cri-containerd-06cceb5599d6c526cfcef104858a2306ab3fb1eaa38dca037092ac25ee79909e.scope.
May 13 00:41:53.050586 env[1200]: time="2025-05-13T00:41:53.046325037Z" level=info msg="CreateContainer within sandbox \"51c139e10dddc6be34ba3372a3fbf208996ceef7b5ba332ef524ceab88ba5bb1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bbe5dca12675f538c59803db066ef5b4581c31f06a52b76ae6f87bf38b78395c\""
May 13 00:41:53.050586 env[1200]: time="2025-05-13T00:41:53.046698180Z" level=info msg="StartContainer for \"bbe5dca12675f538c59803db066ef5b4581c31f06a52b76ae6f87bf38b78395c\""
May 13 00:41:53.050743 kubelet[1569]: I0513 00:41:53.049404 1569 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:41:53.050743 kubelet[1569]: E0513 00:41:53.049738 1569 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost"
May 13 00:41:53.054541 systemd[1]: Started cri-containerd-f409c23c3b546d65968f39f5d572e927b9ac7a2c5b62354cdfbd8af97b38617b.scope.
May 13 00:41:53.061283 systemd[1]: Started cri-containerd-bbe5dca12675f538c59803db066ef5b4581c31f06a52b76ae6f87bf38b78395c.scope.
May 13 00:41:53.085308 env[1200]: time="2025-05-13T00:41:53.085270439Z" level=info msg="StartContainer for \"06cceb5599d6c526cfcef104858a2306ab3fb1eaa38dca037092ac25ee79909e\" returns successfully"
May 13 00:41:53.101049 env[1200]: time="2025-05-13T00:41:53.101010321Z" level=info msg="StartContainer for \"f409c23c3b546d65968f39f5d572e927b9ac7a2c5b62354cdfbd8af97b38617b\" returns successfully"
May 13 00:41:53.112863 env[1200]: time="2025-05-13T00:41:53.112818846Z" level=info msg="StartContainer for \"bbe5dca12675f538c59803db066ef5b4581c31f06a52b76ae6f87bf38b78395c\" returns successfully"
May 13 00:41:53.404548 kubelet[1569]: E0513 00:41:53.404519 1569 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:41:53.405883 kubelet[1569]: E0513 00:41:53.405871 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:53.407958 kubelet[1569]: E0513 00:41:53.407945 1569 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:41:53.408236 kubelet[1569]: E0513 00:41:53.408223 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:53.409304 kubelet[1569]: E0513 00:41:53.409292 1569 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:41:53.409464 kubelet[1569]: E0513 00:41:53.409451 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:54.306977 kubelet[1569]: E0513 00:41:54.306929 1569 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 13 00:41:54.359366 kubelet[1569]: I0513 00:41:54.359325 1569 apiserver.go:52] "Watching apiserver"
May 13 00:41:54.381442 kubelet[1569]: I0513 00:41:54.381404 1569 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:41:54.411248 kubelet[1569]: E0513 00:41:54.411223 1569 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:41:54.411456 kubelet[1569]: E0513 00:41:54.411353 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:54.411456 kubelet[1569]: E0513 00:41:54.411399 1569 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:41:54.411533 kubelet[1569]: E0513 00:41:54.411518 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:54.435164 kubelet[1569]: E0513 00:41:54.435083 1569 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183eef622adfa4aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:41:50.371464362 +0000 UTC m=+0.312165194,LastTimestamp:2025-05-13 00:41:50.371464362 +0000 UTC m=+0.312165194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 00:41:54.530269 kubelet[1569]: E0513 00:41:54.530147 1569 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183eef622c32b2f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:41:50.393684727 +0000 UTC m=+0.334385558,LastTimestamp:2025-05-13 00:41:50.393684727 +0000 UTC m=+0.334385558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 00:41:54.651178 kubelet[1569]: I0513 00:41:54.651143 1569 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:41:54.782467 kubelet[1569]: E0513 00:41:54.782430 1569 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:41:54.782599 kubelet[1569]: E0513 00:41:54.782583 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:54.785266 kubelet[1569]: E0513 00:41:54.785183 1569 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183eef622c32c3ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:41:50.393689069 +0000 UTC m=+0.334389900,LastTimestamp:2025-05-13 00:41:50.393689069 +0000 UTC m=+0.334389900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 00:41:54.785814 kubelet[1569]: I0513 00:41:54.785779 1569 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 13 00:41:54.878979 kubelet[1569]: I0513 00:41:54.878940 1569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 13 00:41:54.996142 kubelet[1569]: E0513 00:41:54.996025 1569 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 13 00:41:54.996142 kubelet[1569]: I0513 00:41:54.996054 1569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 13 00:41:54.997764 kubelet[1569]: E0513 00:41:54.997743 1569 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 13 00:41:54.997764 kubelet[1569]: I0513 00:41:54.997760 1569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:54.999100 kubelet[1569]: E0513 00:41:54.999071 1569 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:55.583252 kubelet[1569]: I0513 00:41:55.583221 1569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 13 00:41:55.693952 kubelet[1569]: E0513 00:41:55.693917 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:56.413495 kubelet[1569]: E0513 00:41:56.413458 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:56.415485 kubelet[1569]: I0513 00:41:56.415451 1569 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 13 00:41:56.510014 kubelet[1569]: E0513 00:41:56.509967 1569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:56.993261 systemd[1]: Reloading.
May 13 00:41:57.066123 /usr/lib/systemd/system-generators/torcx-generator[1865]: time="2025-05-13T00:41:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:41:57.066147 /usr/lib/systemd/system-generators/torcx-generator[1865]: time="2025-05-13T00:41:57Z" level=info msg="torcx already run"
May 13 00:41:57.116123 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:41:57.116136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:41:57.132733 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:41:57.219393 systemd[1]: Stopping kubelet.service...
May 13 00:41:57.241070 systemd[1]: kubelet.service: Deactivated successfully.
May 13 00:41:57.241224 systemd[1]: Stopped kubelet.service.
May 13 00:41:57.242578 systemd[1]: Starting kubelet.service...
May 13 00:41:57.323476 systemd[1]: Started kubelet.service.
May 13 00:41:57.356379 kubelet[1911]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:41:57.356379 kubelet[1911]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 00:41:57.356379 kubelet[1911]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:41:57.356745 kubelet[1911]: I0513 00:41:57.356431 1911 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:41:57.362988 kubelet[1911]: I0513 00:41:57.362962 1911 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 00:41:57.363077 kubelet[1911]: I0513 00:41:57.363063 1911 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:41:57.363411 kubelet[1911]: I0513 00:41:57.363394 1911 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 00:41:57.364582 kubelet[1911]: I0513 00:41:57.364568 1911 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 00:41:57.366565 kubelet[1911]: I0513 00:41:57.366550 1911 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:41:57.369677 kubelet[1911]: E0513 00:41:57.369646 1911 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 13 00:41:57.369677 kubelet[1911]: I0513 00:41:57.369678 1911 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 13 00:41:57.374030 kubelet[1911]: I0513 00:41:57.374005 1911 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:41:57.374219 kubelet[1911]: I0513 00:41:57.374169 1911 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:41:57.374482 kubelet[1911]: I0513 00:41:57.374211 1911 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 00:41:57.374669 kubelet[1911]: I0513 00:41:57.374650 1911 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:41:57.374700 kubelet[1911]: I0513 00:41:57.374678 1911 container_manager_linux.go:304] "Creating device plugin manager"
May 13 00:41:57.374738 kubelet[1911]: I0513 00:41:57.374727 1911 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:41:57.374918 kubelet[1911]: I0513 00:41:57.374895 1911 kubelet.go:446] "Attempting to sync node with API server"
May 13 00:41:57.374918 kubelet[1911]: I0513 00:41:57.374913 1911 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:41:57.374988 kubelet[1911]: I0513 00:41:57.374931 1911 kubelet.go:352] "Adding apiserver pod source"
May 13 00:41:57.374988 kubelet[1911]: I0513 00:41:57.374941 1911 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:41:57.375453 kubelet[1911]: I0513 00:41:57.375435 1911 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 13 00:41:57.375829 kubelet[1911]: I0513 00:41:57.375818 1911 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:41:57.376294 kubelet[1911]: I0513 00:41:57.376284 1911 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 00:41:57.376379 kubelet[1911]: I0513 00:41:57.376366 1911 server.go:1287] "Started kubelet"
May 13 00:41:57.376536 kubelet[1911]: I0513 00:41:57.376509 1911 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:41:57.376778 kubelet[1911]: I0513 00:41:57.376740 1911 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:41:57.377073 kubelet[1911]: I0513 00:41:57.377062 1911 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:41:57.377461 kubelet[1911]: I0513 00:41:57.377367 1911 server.go:490] "Adding debug handlers to kubelet server"
May 13 00:41:57.379060 kubelet[1911]: I0513 00:41:57.379044 1911 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:41:57.379717 kubelet[1911]: E0513 00:41:57.379705 1911 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:41:57.381862 kubelet[1911]: I0513 00:41:57.381848 1911 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 00:41:57.386860 kubelet[1911]: I0513 00:41:57.385018 1911 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 00:41:57.386860 kubelet[1911]: I0513 00:41:57.385082 1911 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 00:41:57.386860 kubelet[1911]: I0513 00:41:57.385157 1911 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:41:57.387797 kubelet[1911]: I0513 00:41:57.387613 1911 factory.go:221] Registration of the systemd container factory successfully
May 13 00:41:57.389942 kubelet[1911]: I0513 00:41:57.389921 1911 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:41:57.391098 kubelet[1911]: I0513 00:41:57.391077 1911 factory.go:221] Registration of the containerd container factory successfully
May 13 00:41:57.395920 kubelet[1911]: I0513 00:41:57.395870 1911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:41:57.400186 kubelet[1911]: I0513 00:41:57.400157 1911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:41:57.400300 kubelet[1911]: I0513 00:41:57.400193 1911 status_manager.go:227] "Starting to sync pod status with apiserver"
May 13 00:41:57.400300 kubelet[1911]: I0513 00:41:57.400226 1911 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 13 00:41:57.400300 kubelet[1911]: I0513 00:41:57.400234 1911 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 00:41:57.400300 kubelet[1911]: E0513 00:41:57.400276 1911 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:41:57.424254 kubelet[1911]: I0513 00:41:57.424226 1911 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 00:41:57.424254 kubelet[1911]: I0513 00:41:57.424243 1911 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 00:41:57.424254 kubelet[1911]: I0513 00:41:57.424260 1911 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:41:57.424441 kubelet[1911]: I0513 00:41:57.424386 1911 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 00:41:57.424441 kubelet[1911]: I0513 00:41:57.424395 1911 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 00:41:57.424441 kubelet[1911]: I0513 00:41:57.424409 1911 policy_none.go:49] "None policy: Start"
May 13 00:41:57.424441 kubelet[1911]: I0513 00:41:57.424417 1911 memory_manager.go:186] "Starting memorymanager" policy="None"
May 13 00:41:57.424441 kubelet[1911]: I0513 00:41:57.424424 1911 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:41:57.424544 kubelet[1911]: I0513 00:41:57.424502 1911 state_mem.go:75] "Updated machine memory state"
May 13 00:41:57.427392 kubelet[1911]: I0513 00:41:57.427372 1911 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:41:57.430082 kubelet[1911]: I0513 00:41:57.430066 1911 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 00:41:57.430130 kubelet[1911]: I0513 00:41:57.430092 1911 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:41:57.431600 kubelet[1911]: I0513 00:41:57.431573 1911 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:41:57.433073 kubelet[1911]: E0513 00:41:57.433047 1911 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 13 00:41:57.501049 kubelet[1911]: I0513 00:41:57.500998 1911 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 13 00:41:57.501049 kubelet[1911]: I0513 00:41:57.501029 1911 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:57.501238 kubelet[1911]: I0513 00:41:57.501106 1911 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 13 00:41:57.505648 kubelet[1911]: E0513 00:41:57.505616 1911 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 13 00:41:57.506071 kubelet[1911]: E0513 00:41:57.506033 1911 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 13 00:41:57.534051 kubelet[1911]: I0513 00:41:57.534024 1911 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:41:57.538860 kubelet[1911]: I0513 00:41:57.538842 1911 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
May 13 00:41:57.538936 kubelet[1911]: I0513 00:41:57.538900 1911 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 13 00:41:57.686823 kubelet[1911]: I0513 00:41:57.686715 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:57.686823 kubelet[1911]: I0513 00:41:57.686743 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:57.686823 kubelet[1911]: I0513 00:41:57.686768 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:41:57.686823 kubelet[1911]: I0513 00:41:57.686782 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a099774ee6146077da31cffc520876d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a099774ee6146077da31cffc520876d2\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:41:57.686823 kubelet[1911]: I0513 00:41:57.686799 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:57.687055 kubelet[1911]: I0513 00:41:57.686827 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:57.687055 kubelet[1911]: I0513 00:41:57.686841 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:57.687055 kubelet[1911]: I0513 00:41:57.686892 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a099774ee6146077da31cffc520876d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a099774ee6146077da31cffc520876d2\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:41:57.687055 kubelet[1911]: I0513 00:41:57.686919 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a099774ee6146077da31cffc520876d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a099774ee6146077da31cffc520876d2\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:41:57.806849 kubelet[1911]: E0513 00:41:57.806820 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:57.806849 kubelet[1911]: E0513 00:41:57.806848 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:57.807031 kubelet[1911]: E0513 00:41:57.806897 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:58.093173 sudo[1946]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 13 00:41:58.093346 sudo[1946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
May 13 00:41:58.376051 kubelet[1911]: I0513 00:41:58.375936 1911 apiserver.go:52] "Watching apiserver"
May 13 00:41:58.385470 kubelet[1911]: I0513 00:41:58.385431 1911 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:41:58.412207 kubelet[1911]: E0513 00:41:58.412167 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:58.413189 kubelet[1911]: I0513 00:41:58.412688 1911 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:58.413189 kubelet[1911]: I0513 00:41:58.412921 1911 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 13 00:41:58.596437 kubelet[1911]: E0513 00:41:58.596229 1911 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 13 00:41:58.596566 kubelet[1911]: E0513 00:41:58.596467 1911 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 13 00:41:58.596600 kubelet[1911]: E0513 00:41:58.596589 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:58.596845 sudo[1946]: pam_unix(sudo:session): session closed for user root
May 13 00:41:58.596966 kubelet[1911]: E0513 00:41:58.596859 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:58.804687 kubelet[1911]: I0513 00:41:58.804555 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.804536365 podStartE2EDuration="2.804536365s" podCreationTimestamp="2025-05-13 00:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:58.796637403 +0000 UTC m=+1.470179294" watchObservedRunningTime="2025-05-13 00:41:58.804536365 +0000 UTC m=+1.478078256"
May 13 00:41:58.811784 kubelet[1911]: I0513 00:41:58.811733 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.811697395 podStartE2EDuration="3.811697395s" podCreationTimestamp="2025-05-13 00:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:58.811386312 +0000 UTC m=+1.484928213" watchObservedRunningTime="2025-05-13 00:41:58.811697395 +0000 UTC m=+1.485239296"
May 13 00:41:58.812055 kubelet[1911]: I0513 00:41:58.812028 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8120137299999999 podStartE2EDuration="1.81201373s" podCreationTimestamp="2025-05-13 00:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:58.804973397 +0000 UTC m=+1.478515298" watchObservedRunningTime="2025-05-13 00:41:58.81201373 +0000 UTC m=+1.485555651"
May 13 00:41:59.413285 kubelet[1911]: E0513 00:41:59.413253 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:59.413624 kubelet[1911]: E0513 00:41:59.413334 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:59.413624 kubelet[1911]: E0513 00:41:59.413357 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:41:59.976030 sudo[1305]: pam_unix(sudo:session): session closed for user root
May 13 00:41:59.977558 sshd[1302]: pam_unix(sshd:session): session closed for user core
May 13 00:41:59.979671 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:54364.service: Deactivated successfully.
May 13 00:41:59.980437 systemd[1]: session-5.scope: Deactivated successfully.
May 13 00:41:59.980563 systemd[1]: session-5.scope: Consumed 4.054s CPU time.
May 13 00:41:59.981209 systemd-logind[1195]: Session 5 logged out. Waiting for processes to exit.
May 13 00:41:59.982063 systemd-logind[1195]: Removed session 5.
May 13 00:42:00.414284 kubelet[1911]: E0513 00:42:00.414261 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:01.953481 kubelet[1911]: E0513 00:42:01.953443 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:02.128829 kubelet[1911]: I0513 00:42:02.128786 1911 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 00:42:02.129178 env[1200]: time="2025-05-13T00:42:02.129134151Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 00:42:02.129625 kubelet[1911]: I0513 00:42:02.129390 1911 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 00:42:02.917893 systemd[1]: Created slice kubepods-burstable-pod35319098_239c_4ce5_987f_741bdd6be0b5.slice.
May 13 00:42:02.922937 systemd[1]: Created slice kubepods-besteffort-pod53a1eea4_734b_4384_af8f_9140955e2c6b.slice.
May 13 00:42:02.924689 kubelet[1911]: I0513 00:42:02.924653 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53a1eea4-734b-4384-af8f-9140955e2c6b-lib-modules\") pod \"kube-proxy-5wqwm\" (UID: \"53a1eea4-734b-4384-af8f-9140955e2c6b\") " pod="kube-system/kube-proxy-5wqwm"
May 13 00:42:02.924689 kubelet[1911]: I0513 00:42:02.924691 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53a1eea4-734b-4384-af8f-9140955e2c6b-kube-proxy\") pod \"kube-proxy-5wqwm\" (UID: \"53a1eea4-734b-4384-af8f-9140955e2c6b\") " pod="kube-system/kube-proxy-5wqwm"
May 13 00:42:02.924846 kubelet[1911]: I0513 00:42:02.924709 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-hostproc\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.924846 kubelet[1911]: I0513 00:42:02.924721 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-lib-modules\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.924927 kubelet[1911]: I0513 00:42:02.924850 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-xtables-lock\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.924927 kubelet[1911]: I0513 00:42:02.924872 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35319098-239c-4ce5-987f-741bdd6be0b5-hubble-tls\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.924927 kubelet[1911]: I0513 00:42:02.924889 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cni-path\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.924927 kubelet[1911]: I0513 00:42:02.924910 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-etc-cni-netd\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.924927 kubelet[1911]: I0513 00:42:02.924925 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzrbq\" (UniqueName: \"kubernetes.io/projected/53a1eea4-734b-4384-af8f-9140955e2c6b-kube-api-access-nzrbq\") pod \"kube-proxy-5wqwm\" (UID: \"53a1eea4-734b-4384-af8f-9140955e2c6b\") " pod="kube-system/kube-proxy-5wqwm"
May 13 00:42:02.925100 kubelet[1911]: I0513 00:42:02.924944 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-cgroup\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.925100 kubelet[1911]: I0513 00:42:02.925081 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-config-path\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.925100 kubelet[1911]: I0513 00:42:02.925098 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-host-proc-sys-kernel\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.925213 kubelet[1911]: I0513 00:42:02.925115 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-bpf-maps\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.925213 kubelet[1911]: I0513 00:42:02.925131 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-host-proc-sys-net\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.925213 kubelet[1911]: I0513 00:42:02.925146 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4grcb\" (UniqueName: \"kubernetes.io/projected/35319098-239c-4ce5-987f-741bdd6be0b5-kube-api-access-4grcb\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.925213 kubelet[1911]: I0513 00:42:02.925172 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53a1eea4-734b-4384-af8f-9140955e2c6b-xtables-lock\") pod \"kube-proxy-5wqwm\" (UID: \"53a1eea4-734b-4384-af8f-9140955e2c6b\") " pod="kube-system/kube-proxy-5wqwm"
May 13 00:42:02.925213 kubelet[1911]: I0513 00:42:02.925190 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-run\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:02.925213 kubelet[1911]: I0513 00:42:02.925210 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35319098-239c-4ce5-987f-741bdd6be0b5-clustermesh-secrets\") pod \"cilium-49qzn\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " pod="kube-system/cilium-49qzn"
May 13 00:42:03.026499 kubelet[1911]: I0513 00:42:03.026467 1911 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 13 00:42:03.221024 kubelet[1911]: E0513 00:42:03.220901 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:03.222425 env[1200]: time="2025-05-13T00:42:03.222385976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-49qzn,Uid:35319098-239c-4ce5-987f-741bdd6be0b5,Namespace:kube-system,Attempt:0,}"
May 13 00:42:03.229594 kubelet[1911]: E0513 00:42:03.229565 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:03.230060 env[1200]: time="2025-05-13T00:42:03.230021158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wqwm,Uid:53a1eea4-734b-4384-af8f-9140955e2c6b,Namespace:kube-system,Attempt:0,}"
May 13 00:42:03.854994 systemd[1]: Created slice kubepods-besteffort-pod1bb07763_a021_4348_ace4_06a913442246.slice.
May 13 00:42:03.931251 kubelet[1911]: I0513 00:42:03.931203 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bb07763-a021-4348-ace4-06a913442246-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zxmlh\" (UID: \"1bb07763-a021-4348-ace4-06a913442246\") " pod="kube-system/cilium-operator-6c4d7847fc-zxmlh" May 13 00:42:03.931251 kubelet[1911]: I0513 00:42:03.931243 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpq2m\" (UniqueName: \"kubernetes.io/projected/1bb07763-a021-4348-ace4-06a913442246-kube-api-access-xpq2m\") pod \"cilium-operator-6c4d7847fc-zxmlh\" (UID: \"1bb07763-a021-4348-ace4-06a913442246\") " pod="kube-system/cilium-operator-6c4d7847fc-zxmlh" May 13 00:42:04.158935 kubelet[1911]: E0513 00:42:04.158786 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:04.159287 env[1200]: time="2025-05-13T00:42:04.159239604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zxmlh,Uid:1bb07763-a021-4348-ace4-06a913442246,Namespace:kube-system,Attempt:0,}" May 13 00:42:04.171887 env[1200]: time="2025-05-13T00:42:04.171824193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:04.172033 env[1200]: time="2025-05-13T00:42:04.171866153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:04.172033 env[1200]: time="2025-05-13T00:42:04.171876155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:04.172179 env[1200]: time="2025-05-13T00:42:04.172056919Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a pid=2007 runtime=io.containerd.runc.v2 May 13 00:42:04.189947 systemd[1]: Started cri-containerd-e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a.scope. May 13 00:42:04.190990 env[1200]: time="2025-05-13T00:42:04.190464150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:04.190990 env[1200]: time="2025-05-13T00:42:04.190502825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:04.190990 env[1200]: time="2025-05-13T00:42:04.190525264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:04.190990 env[1200]: time="2025-05-13T00:42:04.190626764Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/189679b6a60d3c5b2daec03aff475eed6d135796e303360062f908c26a5aa52c pid=2034 runtime=io.containerd.runc.v2 May 13 00:42:04.205871 systemd[1]: Started cri-containerd-189679b6a60d3c5b2daec03aff475eed6d135796e303360062f908c26a5aa52c.scope. May 13 00:42:04.214754 env[1200]: time="2025-05-13T00:42:04.214665484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:04.214754 env[1200]: time="2025-05-13T00:42:04.214706934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:04.214754 env[1200]: time="2025-05-13T00:42:04.214716726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:04.215191 env[1200]: time="2025-05-13T00:42:04.215128741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb pid=2076 runtime=io.containerd.runc.v2 May 13 00:42:04.217591 env[1200]: time="2025-05-13T00:42:04.217177327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-49qzn,Uid:35319098-239c-4ce5-987f-741bdd6be0b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\"" May 13 00:42:04.218048 kubelet[1911]: E0513 00:42:04.218022 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:04.221331 env[1200]: time="2025-05-13T00:42:04.221291274Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:42:04.230940 systemd[1]: Started cri-containerd-8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb.scope. May 13 00:42:04.237598 kubelet[1911]: E0513 00:42:04.234143 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:04.237646 env[1200]: time="2025-05-13T00:42:04.233638276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5wqwm,Uid:53a1eea4-734b-4384-af8f-9140955e2c6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"189679b6a60d3c5b2daec03aff475eed6d135796e303360062f908c26a5aa52c\"" May 13 00:42:04.237646 env[1200]: time="2025-05-13T00:42:04.235852181Z" level=info msg="CreateContainer within sandbox \"189679b6a60d3c5b2daec03aff475eed6d135796e303360062f908c26a5aa52c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:42:04.252269 env[1200]: time="2025-05-13T00:42:04.252233978Z" level=info msg="CreateContainer within sandbox \"189679b6a60d3c5b2daec03aff475eed6d135796e303360062f908c26a5aa52c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3af873416246f56d27467e35f2e51ec337dd2b72fc55305e32c60a20f17a6bf4\"" May 13 00:42:04.253595 env[1200]: time="2025-05-13T00:42:04.253565203Z" level=info msg="StartContainer for \"3af873416246f56d27467e35f2e51ec337dd2b72fc55305e32c60a20f17a6bf4\"" May 13 00:42:04.270333 systemd[1]: Started cri-containerd-3af873416246f56d27467e35f2e51ec337dd2b72fc55305e32c60a20f17a6bf4.scope. 
May 13 00:42:04.277031 env[1200]: time="2025-05-13T00:42:04.276991772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zxmlh,Uid:1bb07763-a021-4348-ace4-06a913442246,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb\"" May 13 00:42:04.283628 kubelet[1911]: E0513 00:42:04.283593 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:04.298463 env[1200]: time="2025-05-13T00:42:04.298416587Z" level=info msg="StartContainer for \"3af873416246f56d27467e35f2e51ec337dd2b72fc55305e32c60a20f17a6bf4\" returns successfully" May 13 00:42:04.423960 kubelet[1911]: E0513 00:42:04.423847 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:04.433288 kubelet[1911]: I0513 00:42:04.433242 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5wqwm" podStartSLOduration=2.433226322 podStartE2EDuration="2.433226322s" podCreationTimestamp="2025-05-13 00:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:04.433060352 +0000 UTC m=+7.106602253" watchObservedRunningTime="2025-05-13 00:42:04.433226322 +0000 UTC m=+7.106768213" May 13 00:42:05.076814 systemd[1]: run-containerd-runc-k8s.io-e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a-runc.SgeFqR.mount: Deactivated successfully. May 13 00:42:07.185280 kubelet[1911]: E0513 00:42:07.185233 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:07.430582 kubelet[1911]: E0513 00:42:07.430501 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:08.432204 kubelet[1911]: E0513 00:42:08.432165 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:08.944604 kubelet[1911]: E0513 00:42:08.944572 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:10.062254 update_engine[1196]: I0513 00:42:10.062206 1196 update_attempter.cc:509] Updating boot flags... May 13 00:42:11.958041 kubelet[1911]: E0513 00:42:11.957970 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:12.301584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3450425400.mount: Deactivated successfully. 
May 13 00:42:12.438368 kubelet[1911]: E0513 00:42:12.438334 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:17.672164 env[1200]: time="2025-05-13T00:42:17.672105493Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:17.673844 env[1200]: time="2025-05-13T00:42:17.673798084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:17.675381 env[1200]: time="2025-05-13T00:42:17.675346752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:17.675842 env[1200]: time="2025-05-13T00:42:17.675799212Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 00:42:17.676890 env[1200]: time="2025-05-13T00:42:17.676858736Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:42:17.677716 env[1200]: time="2025-05-13T00:42:17.677687180Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:42:17.690077 env[1200]: time="2025-05-13T00:42:17.690014620Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\"" May 13 00:42:17.690475 env[1200]: time="2025-05-13T00:42:17.690453442Z" level=info msg="StartContainer for \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\"" May 13 00:42:17.708104 systemd[1]: Started cri-containerd-5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba.scope. May 13 00:42:17.734890 systemd[1]: cri-containerd-5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba.scope: Deactivated successfully. 
May 13 00:42:18.126859 env[1200]: time="2025-05-13T00:42:18.126797469Z" level=info msg="StartContainer for \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\" returns successfully" May 13 00:42:18.365574 env[1200]: time="2025-05-13T00:42:18.365518677Z" level=info msg="shim disconnected" id=5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba May 13 00:42:18.365574 env[1200]: time="2025-05-13T00:42:18.365575732Z" level=warning msg="cleaning up after shim disconnected" id=5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba namespace=k8s.io May 13 00:42:18.365574 env[1200]: time="2025-05-13T00:42:18.365585271Z" level=info msg="cleaning up dead shim" May 13 00:42:18.371999 env[1200]: time="2025-05-13T00:42:18.371947902Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2354 runtime=io.containerd.runc.v2\n" May 13 00:42:18.448886 kubelet[1911]: E0513 00:42:18.448757 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:18.451544 env[1200]: time="2025-05-13T00:42:18.451496757Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:42:18.470059 env[1200]: time="2025-05-13T00:42:18.470012211Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\"" May 13 00:42:18.470567 env[1200]: time="2025-05-13T00:42:18.470517254Z" level=info msg="StartContainer for \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\"" May 13 00:42:18.483986 systemd[1]: Started cri-containerd-376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030.scope. May 13 00:42:18.507929 env[1200]: time="2025-05-13T00:42:18.507866047Z" level=info msg="StartContainer for \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\" returns successfully" May 13 00:42:18.515125 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:42:18.515386 systemd[1]: Stopped systemd-sysctl.service. May 13 00:42:18.515543 systemd[1]: Stopping systemd-sysctl.service... May 13 00:42:18.516855 systemd[1]: Starting systemd-sysctl.service... May 13 00:42:18.517894 systemd[1]: cri-containerd-376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030.scope: Deactivated successfully. May 13 00:42:18.523893 systemd[1]: Finished systemd-sysctl.service. 
May 13 00:42:18.537941 env[1200]: time="2025-05-13T00:42:18.537889909Z" level=info msg="shim disconnected" id=376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030 May 13 00:42:18.537941 env[1200]: time="2025-05-13T00:42:18.537929700Z" level=warning msg="cleaning up after shim disconnected" id=376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030 namespace=k8s.io May 13 00:42:18.537941 env[1200]: time="2025-05-13T00:42:18.537939450Z" level=info msg="cleaning up dead shim" May 13 00:42:18.543411 env[1200]: time="2025-05-13T00:42:18.543386216Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2418 runtime=io.containerd.runc.v2\n" May 13 00:42:18.687261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba-rootfs.mount: Deactivated successfully. May 13 00:42:19.451929 kubelet[1911]: E0513 00:42:19.451895 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:19.453366 env[1200]: time="2025-05-13T00:42:19.453318952Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:42:19.975930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3063716176.mount: Deactivated successfully. May 13 00:42:19.981313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804304785.mount: Deactivated successfully. May 13 00:42:19.991397 env[1200]: time="2025-05-13T00:42:19.991340720Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\"" May 13 00:42:19.991944 env[1200]: time="2025-05-13T00:42:19.991885301Z" level=info msg="StartContainer for \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\"" May 13 00:42:20.007637 systemd[1]: Started cri-containerd-761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba.scope. May 13 00:42:20.034097 env[1200]: time="2025-05-13T00:42:20.034055517Z" level=info msg="StartContainer for \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\" returns successfully" May 13 00:42:20.034564 systemd[1]: cri-containerd-761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba.scope: Deactivated successfully. 
May 13 00:42:20.056627 env[1200]: time="2025-05-13T00:42:20.056582573Z" level=info msg="shim disconnected" id=761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba May 13 00:42:20.056627 env[1200]: time="2025-05-13T00:42:20.056628005Z" level=warning msg="cleaning up after shim disconnected" id=761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba namespace=k8s.io May 13 00:42:20.056812 env[1200]: time="2025-05-13T00:42:20.056637534Z" level=info msg="cleaning up dead shim" May 13 00:42:20.062927 env[1200]: time="2025-05-13T00:42:20.062879254Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2474 runtime=io.containerd.runc.v2\n" May 13 00:42:20.455481 kubelet[1911]: E0513 00:42:20.455395 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:20.459654 env[1200]: time="2025-05-13T00:42:20.459603127Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:42:20.473200 env[1200]: time="2025-05-13T00:42:20.473149608Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\"" May 13 00:42:20.473932 env[1200]: time="2025-05-13T00:42:20.473895119Z" level=info msg="StartContainer for \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\"" May 13 00:42:20.487861 systemd[1]: Started cri-containerd-73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08.scope. May 13 00:42:20.511627 systemd[1]: cri-containerd-73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08.scope: Deactivated successfully. 
May 13 00:42:20.512473 env[1200]: time="2025-05-13T00:42:20.512386485Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35319098_239c_4ce5_987f_741bdd6be0b5.slice/cri-containerd-73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08.scope/memory.events\": no such file or directory" May 13 00:42:20.515432 env[1200]: time="2025-05-13T00:42:20.515403100Z" level=info msg="StartContainer for \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\" returns successfully" May 13 00:42:20.854690 env[1200]: time="2025-05-13T00:42:20.854646205Z" level=info msg="shim disconnected" id=73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08 May 13 00:42:20.854690 env[1200]: time="2025-05-13T00:42:20.854684943Z" level=warning msg="cleaning up after shim disconnected" id=73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08 namespace=k8s.io May 13 00:42:20.854690 env[1200]: time="2025-05-13T00:42:20.854693291Z" level=info msg="cleaning up dead shim" May 13 00:42:20.860626 env[1200]: time="2025-05-13T00:42:20.860588121Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2528 runtime=io.containerd.runc.v2\n" May 13 00:42:20.973387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba-rootfs.mount: Deactivated successfully. May 13 00:42:20.998174 env[1200]: time="2025-05-13T00:42:20.998118046Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:21.000918 env[1200]: time="2025-05-13T00:42:21.000884066Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:21.001799 env[1200]: time="2025-05-13T00:42:21.001735666Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:21.002160 env[1200]: time="2025-05-13T00:42:21.002127673Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 00:42:21.004453 env[1200]: time="2025-05-13T00:42:21.004362461Z" level=info msg="CreateContainer within sandbox \"8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:42:21.015823 env[1200]: time="2025-05-13T00:42:21.015767256Z" level=info msg="CreateContainer within sandbox \"8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\"" May 13 00:42:21.016246 env[1200]: time="2025-05-13T00:42:21.016208562Z" level=info msg="StartContainer for \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\"" May 13 00:42:21.031402 
systemd[1]: Started cri-containerd-a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64.scope. May 13 00:42:21.055159 env[1200]: time="2025-05-13T00:42:21.055110990Z" level=info msg="StartContainer for \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\" returns successfully" May 13 00:42:21.458661 kubelet[1911]: E0513 00:42:21.458622 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:21.467401 kubelet[1911]: E0513 00:42:21.467368 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:21.469117 env[1200]: time="2025-05-13T00:42:21.468852191Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:42:21.473002 kubelet[1911]: I0513 00:42:21.472944 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zxmlh" podStartSLOduration=1.7540584959999999 podStartE2EDuration="18.472927264s" podCreationTimestamp="2025-05-13 00:42:03 +0000 UTC" firstStartedPulling="2025-05-13 00:42:04.284124173 +0000 UTC m=+6.957666074" lastFinishedPulling="2025-05-13 00:42:21.002992941 +0000 UTC m=+23.676534842" observedRunningTime="2025-05-13 00:42:21.472503464 +0000 UTC m=+24.146045365" watchObservedRunningTime="2025-05-13 00:42:21.472927264 +0000 UTC m=+24.146469165" May 13 00:42:21.485006 env[1200]: time="2025-05-13T00:42:21.484932174Z" level=info msg="CreateContainer within sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\"" May 13 00:42:21.485639 env[1200]: time="2025-05-13T00:42:21.485604504Z" level=info msg="StartContainer for \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\"" May 13 00:42:21.503129 systemd[1]: Started cri-containerd-738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b.scope. May 13 00:42:21.598124 systemd[1]: Started sshd@5-10.0.0.58:22-10.0.0.1:58848.service. May 13 00:42:21.625298 env[1200]: time="2025-05-13T00:42:21.625236024Z" level=info msg="StartContainer for \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\" returns successfully" May 13 00:42:21.664284 sshd[2616]: Accepted publickey for core from 10.0.0.1 port 58848 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:21.664864 sshd[2616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:21.669666 systemd[1]: Started session-6.scope. May 13 00:42:21.671484 systemd-logind[1195]: New session 6 of user core. May 13 00:42:21.753732 kubelet[1911]: I0513 00:42:21.753483 1911 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 00:42:21.795904 systemd[1]: Created slice kubepods-burstable-pod90c17cd6_232a_4b0c_8f76_fea8b306e1de.slice. May 13 00:42:21.801230 systemd[1]: Created slice kubepods-burstable-pod3a7ba210_1721_4f64_82a5_6a5b9eed288d.slice. 
May 13 00:42:21.858036 kubelet[1911]: I0513 00:42:21.857981 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmdt\" (UniqueName: \"kubernetes.io/projected/3a7ba210-1721-4f64-82a5-6a5b9eed288d-kube-api-access-spmdt\") pod \"coredns-668d6bf9bc-vf5bc\" (UID: \"3a7ba210-1721-4f64-82a5-6a5b9eed288d\") " pod="kube-system/coredns-668d6bf9bc-vf5bc" May 13 00:42:21.858036 kubelet[1911]: I0513 00:42:21.858032 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldw6w\" (UniqueName: \"kubernetes.io/projected/90c17cd6-232a-4b0c-8f76-fea8b306e1de-kube-api-access-ldw6w\") pod \"coredns-668d6bf9bc-6bk5d\" (UID: \"90c17cd6-232a-4b0c-8f76-fea8b306e1de\") " pod="kube-system/coredns-668d6bf9bc-6bk5d" May 13 00:42:21.858036 kubelet[1911]: I0513 00:42:21.858048 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a7ba210-1721-4f64-82a5-6a5b9eed288d-config-volume\") pod \"coredns-668d6bf9bc-vf5bc\" (UID: \"3a7ba210-1721-4f64-82a5-6a5b9eed288d\") " pod="kube-system/coredns-668d6bf9bc-vf5bc" May 13 00:42:21.858240 kubelet[1911]: I0513 00:42:21.858063 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90c17cd6-232a-4b0c-8f76-fea8b306e1de-config-volume\") pod \"coredns-668d6bf9bc-6bk5d\" (UID: \"90c17cd6-232a-4b0c-8f76-fea8b306e1de\") " pod="kube-system/coredns-668d6bf9bc-6bk5d" May 13 00:42:21.884502 sshd[2616]: pam_unix(sshd:session): session closed for user core May 13 00:42:21.887248 systemd[1]: sshd@5-10.0.0.58:22-10.0.0.1:58848.service: Deactivated successfully. May 13 00:42:21.887891 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:42:21.888580 systemd-logind[1195]: Session 6 logged out. Waiting for processes to exit. May 13 00:42:21.889391 systemd-logind[1195]: Removed session 6. 
May 13 00:42:22.100981 kubelet[1911]: E0513 00:42:22.100954 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:22.103930 kubelet[1911]: E0513 00:42:22.103903 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:22.104092 env[1200]: time="2025-05-13T00:42:22.104049742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6bk5d,Uid:90c17cd6-232a-4b0c-8f76-fea8b306e1de,Namespace:kube-system,Attempt:0,}" May 13 00:42:22.104427 env[1200]: time="2025-05-13T00:42:22.104385083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf5bc,Uid:3a7ba210-1721-4f64-82a5-6a5b9eed288d,Namespace:kube-system,Attempt:0,}" May 13 00:42:22.472053 kubelet[1911]: E0513 00:42:22.471943 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:22.472338 kubelet[1911]: E0513 00:42:22.472140 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:22.488213 kubelet[1911]: I0513 00:42:22.488154 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-49qzn" podStartSLOduration=7.031198609 podStartE2EDuration="20.488133669s" podCreationTimestamp="2025-05-13 00:42:02 +0000 UTC" firstStartedPulling="2025-05-13 00:42:04.219754282 +0000 UTC m=+6.893296183" lastFinishedPulling="2025-05-13 00:42:17.676689341 +0000 UTC m=+20.350231243" observedRunningTime="2025-05-13 00:42:22.487871885 +0000 UTC m=+25.161413786" watchObservedRunningTime="2025-05-13 00:42:22.488133669 +0000 UTC m=+25.161675570" May 13 00:42:23.474195 kubelet[1911]: E0513 00:42:23.474156 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:23.541014 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 00:42:23.540401 systemd-networkd[1025]: cilium_host: Link UP May 13 00:42:23.540537 systemd-networkd[1025]: cilium_net: Link UP May 13 00:42:23.540540 systemd-networkd[1025]: cilium_net: Gained carrier May 13 00:42:23.540663 systemd-networkd[1025]: cilium_host: Gained carrier May 13 00:42:23.540885 systemd-networkd[1025]: cilium_host: Gained IPv6LL May 13 00:42:23.615785 systemd-networkd[1025]: cilium_vxlan: Link UP May 13 00:42:23.615793 systemd-networkd[1025]: cilium_vxlan: Gained carrier May 13 00:42:23.806865 kernel: NET: Registered PF_ALG protocol family May 13 00:42:24.138949 systemd-networkd[1025]: cilium_net: Gained IPv6LL May 13 00:42:24.320253 systemd-networkd[1025]: lxc_health: Link UP May 13 00:42:24.333176 systemd-networkd[1025]: lxc_health: Gained carrier May 13 00:42:24.334019 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:42:24.485444 kubelet[1911]: E0513 00:42:24.485291 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:24.687202 systemd-networkd[1025]: lxc9f48fe234030: Link UP May 13 
00:42:24.692414 systemd-networkd[1025]: lxc22780fd7038d: Link UP May 13 00:42:24.705901 kernel: eth0: renamed from tmpad8c4 May 13 00:42:24.713021 systemd-networkd[1025]: lxc9f48fe234030: Gained carrier May 13 00:42:24.713868 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9f48fe234030: link becomes ready May 13 00:42:24.715848 kernel: eth0: renamed from tmp20052 May 13 00:42:24.728194 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:42:24.728334 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc22780fd7038d: link becomes ready May 13 00:42:24.727891 systemd-networkd[1025]: lxc22780fd7038d: Gained carrier May 13 00:42:24.779059 systemd-networkd[1025]: cilium_vxlan: Gained IPv6LL May 13 00:42:25.481681 kubelet[1911]: E0513 00:42:25.481640 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:26.190386 systemd-networkd[1025]: lxc_health: Gained IPv6LL May 13 00:42:26.190695 systemd-networkd[1025]: lxc9f48fe234030: Gained IPv6LL May 13 00:42:26.315040 systemd-networkd[1025]: lxc22780fd7038d: Gained IPv6LL May 13 00:42:26.483400 kubelet[1911]: E0513 00:42:26.483299 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:26.889990 systemd[1]: Started sshd@6-10.0.0.58:22-10.0.0.1:37298.service. May 13 00:42:26.921885 sshd[3131]: Accepted publickey for core from 10.0.0.1 port 37298 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:26.922994 sshd[3131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:26.926924 systemd-logind[1195]: New session 7 of user core. May 13 00:42:26.927850 systemd[1]: Started session-7.scope. May 13 00:42:27.046671 sshd[3131]: pam_unix(sshd:session): session closed for user core May 13 00:42:27.049268 systemd[1]: sshd@6-10.0.0.58:22-10.0.0.1:37298.service: Deactivated successfully. May 13 00:42:27.049963 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:42:27.050932 systemd-logind[1195]: Session 7 logged out. Waiting for processes to exit. May 13 00:42:27.051734 systemd-logind[1195]: Removed session 7. May 13 00:42:27.485145 kubelet[1911]: E0513 00:42:27.485110 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.072830 env[1200]: time="2025-05-13T00:42:28.072755400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:28.072830 env[1200]: time="2025-05-13T00:42:28.072786602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:28.072830 env[1200]: time="2025-05-13T00:42:28.072795970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:28.073265 env[1200]: time="2025-05-13T00:42:28.072893233Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad8c4714e5db44a7aa540716b2b86f5b8ca1f55b0df520384f10cb7734d0d82d pid=3160 runtime=io.containerd.runc.v2 May 13 00:42:28.084688 systemd[1]: run-containerd-runc-k8s.io-ad8c4714e5db44a7aa540716b2b86f5b8ca1f55b0df520384f10cb7734d0d82d-runc.Xp6R6X.mount: Deactivated successfully. May 13 00:42:28.089982 systemd[1]: Started cri-containerd-ad8c4714e5db44a7aa540716b2b86f5b8ca1f55b0df520384f10cb7734d0d82d.scope. May 13 00:42:28.100755 systemd-resolved[1145]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:42:28.111379 env[1200]: time="2025-05-13T00:42:28.111314357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:28.111610 env[1200]: time="2025-05-13T00:42:28.111575845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:28.111812 env[1200]: time="2025-05-13T00:42:28.111772343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:28.112129 env[1200]: time="2025-05-13T00:42:28.112049411Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20052c7afa305aca91fc9d8ad0fdc24b945a359e95a467fb558eed99afa3384f pid=3192 runtime=io.containerd.runc.v2 May 13 00:42:28.123092 systemd[1]: Started cri-containerd-20052c7afa305aca91fc9d8ad0fdc24b945a359e95a467fb558eed99afa3384f.scope. 
May 13 00:42:28.131846 env[1200]: time="2025-05-13T00:42:28.129193572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6bk5d,Uid:90c17cd6-232a-4b0c-8f76-fea8b306e1de,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad8c4714e5db44a7aa540716b2b86f5b8ca1f55b0df520384f10cb7734d0d82d\"" May 13 00:42:28.131846 env[1200]: time="2025-05-13T00:42:28.131623558Z" level=info msg="CreateContainer within sandbox \"ad8c4714e5db44a7aa540716b2b86f5b8ca1f55b0df520384f10cb7734d0d82d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:42:28.132030 kubelet[1911]: E0513 00:42:28.129781 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.136762 systemd-resolved[1145]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:42:28.152176 env[1200]: time="2025-05-13T00:42:28.151398081Z" level=info msg="CreateContainer within sandbox \"ad8c4714e5db44a7aa540716b2b86f5b8ca1f55b0df520384f10cb7734d0d82d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5013262898de3db989a37a3237dc18345ef5c54de800f0e99b55d64aa2f4d17\"" May 13 00:42:28.152176 env[1200]: time="2025-05-13T00:42:28.151853551Z" level=info msg="StartContainer for \"c5013262898de3db989a37a3237dc18345ef5c54de800f0e99b55d64aa2f4d17\"" May 13 00:42:28.162154 env[1200]: time="2025-05-13T00:42:28.162100119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf5bc,Uid:3a7ba210-1721-4f64-82a5-6a5b9eed288d,Namespace:kube-system,Attempt:0,} returns sandbox id \"20052c7afa305aca91fc9d8ad0fdc24b945a359e95a467fb558eed99afa3384f\"" May 13 00:42:28.162757 kubelet[1911]: E0513 00:42:28.162723 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.165144 env[1200]: time="2025-05-13T00:42:28.165103009Z" level=info msg="CreateContainer within sandbox \"20052c7afa305aca91fc9d8ad0fdc24b945a359e95a467fb558eed99afa3384f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:42:28.173080 systemd[1]: Started cri-containerd-c5013262898de3db989a37a3237dc18345ef5c54de800f0e99b55d64aa2f4d17.scope. May 13 00:42:28.184479 env[1200]: time="2025-05-13T00:42:28.184432771Z" level=info msg="CreateContainer within sandbox \"20052c7afa305aca91fc9d8ad0fdc24b945a359e95a467fb558eed99afa3384f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"350f57ae5cfdd47ae698dc715fad880a1085c490964e41046b0ab1ede73c5b27\"" May 13 00:42:28.185183 env[1200]: time="2025-05-13T00:42:28.185154158Z" level=info msg="StartContainer for \"350f57ae5cfdd47ae698dc715fad880a1085c490964e41046b0ab1ede73c5b27\"" May 13 00:42:28.199175 env[1200]: time="2025-05-13T00:42:28.198776774Z" level=info msg="StartContainer for \"c5013262898de3db989a37a3237dc18345ef5c54de800f0e99b55d64aa2f4d17\" returns successfully" May 13 00:42:28.198999 systemd[1]: Started cri-containerd-350f57ae5cfdd47ae698dc715fad880a1085c490964e41046b0ab1ede73c5b27.scope. 
May 13 00:42:28.229780 env[1200]: time="2025-05-13T00:42:28.229733415Z" level=info msg="StartContainer for \"350f57ae5cfdd47ae698dc715fad880a1085c490964e41046b0ab1ede73c5b27\" returns successfully" May 13 00:42:28.487569 kubelet[1911]: E0513 00:42:28.487448 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.489430 kubelet[1911]: E0513 00:42:28.489401 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.512380 kubelet[1911]: I0513 00:42:28.512078 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6bk5d" podStartSLOduration=25.512057167000002 podStartE2EDuration="25.512057167s" podCreationTimestamp="2025-05-13 00:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:28.511365709 +0000 UTC m=+31.184907610" watchObservedRunningTime="2025-05-13 00:42:28.512057167 +0000 UTC m=+31.185599068" May 13 00:42:28.512380 kubelet[1911]: I0513 00:42:28.512186 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vf5bc" podStartSLOduration=25.51218027 podStartE2EDuration="25.51218027s" podCreationTimestamp="2025-05-13 00:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:28.500518755 +0000 UTC m=+31.174060686" watchObservedRunningTime="2025-05-13 00:42:28.51218027 +0000 UTC m=+31.185722181" May 13 00:42:29.490869 kubelet[1911]: E0513 00:42:29.490820 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:29.490869 kubelet[1911]: E0513 00:42:29.490869 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:30.492749 kubelet[1911]: E0513 00:42:30.492698 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:30.492749 kubelet[1911]: E0513 00:42:30.492724 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:32.051189 systemd[1]: Started sshd@7-10.0.0.58:22-10.0.0.1:37310.service. May 13 00:42:32.087778 sshd[3320]: Accepted publickey for core from 10.0.0.1 port 37310 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:32.089204 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:32.093708 systemd-logind[1195]: New session 8 of user core. May 13 00:42:32.094563 systemd[1]: Started session-8.scope. May 13 00:42:32.218155 sshd[3320]: pam_unix(sshd:session): session closed for user core May 13 00:42:32.220176 systemd[1]: sshd@7-10.0.0.58:22-10.0.0.1:37310.service: Deactivated successfully. May 13 00:42:32.220988 systemd[1]: session-8.scope: Deactivated successfully. 
May 13 00:42:32.221860 systemd-logind[1195]: Session 8 logged out. Waiting for processes to exit. May 13 00:42:32.222547 systemd-logind[1195]: Removed session 8. May 13 00:42:37.222635 systemd[1]: Started sshd@8-10.0.0.58:22-10.0.0.1:56892.service. May 13 00:42:37.253349 sshd[3337]: Accepted publickey for core from 10.0.0.1 port 56892 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:37.254747 sshd[3337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:37.258753 systemd-logind[1195]: New session 9 of user core. May 13 00:42:37.259423 systemd[1]: Started session-9.scope. May 13 00:42:37.364458 sshd[3337]: pam_unix(sshd:session): session closed for user core May 13 00:42:37.366863 systemd[1]: sshd@8-10.0.0.58:22-10.0.0.1:56892.service: Deactivated successfully. May 13 00:42:37.367613 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:42:37.368263 systemd-logind[1195]: Session 9 logged out. Waiting for processes to exit. May 13 00:42:37.369081 systemd-logind[1195]: Removed session 9. May 13 00:42:42.368299 systemd[1]: Started sshd@9-10.0.0.58:22-10.0.0.1:56896.service. May 13 00:42:42.396311 sshd[3351]: Accepted publickey for core from 10.0.0.1 port 56896 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:42.397178 sshd[3351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:42.400024 systemd-logind[1195]: New session 10 of user core. May 13 00:42:42.400746 systemd[1]: Started session-10.scope. May 13 00:42:42.529430 sshd[3351]: pam_unix(sshd:session): session closed for user core May 13 00:42:42.532745 systemd[1]: sshd@9-10.0.0.58:22-10.0.0.1:56896.service: Deactivated successfully. May 13 00:42:42.533316 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:42:42.533882 systemd-logind[1195]: Session 10 logged out. Waiting for processes to exit. May 13 00:42:42.535417 systemd[1]: Started sshd@10-10.0.0.58:22-10.0.0.1:56898.service. May 13 00:42:42.536370 systemd-logind[1195]: Removed session 10. May 13 00:42:42.566085 sshd[3366]: Accepted publickey for core from 10.0.0.1 port 56898 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:42.567230 sshd[3366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:42.570863 systemd-logind[1195]: New session 11 of user core. May 13 00:42:42.571527 systemd[1]: Started session-11.scope. May 13 00:42:42.711706 sshd[3366]: pam_unix(sshd:session): session closed for user core May 13 00:42:42.716289 systemd[1]: Started sshd@11-10.0.0.58:22-10.0.0.1:56914.service. May 13 00:42:42.717139 systemd[1]: sshd@10-10.0.0.58:22-10.0.0.1:56898.service: Deactivated successfully. May 13 00:42:42.718994 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:42:42.719767 systemd-logind[1195]: Session 11 logged out. Waiting for processes to exit. May 13 00:42:42.723898 systemd-logind[1195]: Removed session 11. May 13 00:42:42.753279 sshd[3376]: Accepted publickey for core from 10.0.0.1 port 56914 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:42.754436 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:42.758852 systemd-logind[1195]: New session 12 of user core. May 13 00:42:42.759839 systemd[1]: Started session-12.scope. 
May 13 00:42:42.879876 sshd[3376]: pam_unix(sshd:session): session closed for user core May 13 00:42:42.882760 systemd[1]: sshd@11-10.0.0.58:22-10.0.0.1:56914.service: Deactivated successfully. May 13 00:42:42.883538 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:42:42.884121 systemd-logind[1195]: Session 12 logged out. Waiting for processes to exit. May 13 00:42:42.884828 systemd-logind[1195]: Removed session 12. May 13 00:42:47.884420 systemd[1]: Started sshd@12-10.0.0.58:22-10.0.0.1:51300.service. May 13 00:42:47.911838 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 51300 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:47.912736 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:47.915989 systemd-logind[1195]: New session 13 of user core. May 13 00:42:47.917032 systemd[1]: Started session-13.scope. May 13 00:42:48.031907 sshd[3390]: pam_unix(sshd:session): session closed for user core May 13 00:42:48.034304 systemd[1]: sshd@12-10.0.0.58:22-10.0.0.1:51300.service: Deactivated successfully. May 13 00:42:48.034964 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:42:48.035709 systemd-logind[1195]: Session 13 logged out. Waiting for processes to exit. May 13 00:42:48.036386 systemd-logind[1195]: Removed session 13. May 13 00:42:53.036144 systemd[1]: Started sshd@13-10.0.0.58:22-10.0.0.1:51302.service. May 13 00:42:53.066416 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 51302 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:53.067376 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:53.070556 systemd-logind[1195]: New session 14 of user core. May 13 00:42:53.071339 systemd[1]: Started session-14.scope. May 13 00:42:53.167145 sshd[3406]: pam_unix(sshd:session): session closed for user core May 13 00:42:53.169091 systemd[1]: sshd@13-10.0.0.58:22-10.0.0.1:51302.service: Deactivated successfully. May 13 00:42:53.169835 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:42:53.170633 systemd-logind[1195]: Session 14 logged out. Waiting for processes to exit. May 13 00:42:53.171354 systemd-logind[1195]: Removed session 14. May 13 00:42:58.172221 systemd[1]: Started sshd@14-10.0.0.58:22-10.0.0.1:38838.service. May 13 00:42:58.202212 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 38838 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:58.203419 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:58.206653 systemd-logind[1195]: New session 15 of user core. May 13 00:42:58.207573 systemd[1]: Started session-15.scope. May 13 00:42:58.311532 sshd[3423]: pam_unix(sshd:session): session closed for user core May 13 00:42:58.314384 systemd[1]: sshd@14-10.0.0.58:22-10.0.0.1:38838.service: Deactivated successfully. May 13 00:42:58.314918 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:42:58.315550 systemd-logind[1195]: Session 15 logged out. Waiting for processes to exit. May 13 00:42:58.316618 systemd[1]: Started sshd@15-10.0.0.58:22-10.0.0.1:38848.service. May 13 00:42:58.317424 systemd-logind[1195]: Removed session 15. 
May 13 00:42:58.345267 sshd[3436]: Accepted publickey for core from 10.0.0.1 port 38848 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:58.346138 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:58.349690 systemd-logind[1195]: New session 16 of user core. May 13 00:42:58.350619 systemd[1]: Started session-16.scope. May 13 00:42:58.751243 sshd[3436]: pam_unix(sshd:session): session closed for user core May 13 00:42:58.753967 systemd[1]: sshd@15-10.0.0.58:22-10.0.0.1:38848.service: Deactivated successfully. May 13 00:42:58.754435 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:42:58.754939 systemd-logind[1195]: Session 16 logged out. Waiting for processes to exit. May 13 00:42:58.755936 systemd[1]: Started sshd@16-10.0.0.58:22-10.0.0.1:38864.service. May 13 00:42:58.756758 systemd-logind[1195]: Removed session 16. May 13 00:42:58.785136 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 38864 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:58.786409 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:58.789964 systemd-logind[1195]: New session 17 of user core. May 13 00:42:58.790874 systemd[1]: Started session-17.scope. May 13 00:42:59.669177 sshd[3447]: pam_unix(sshd:session): session closed for user core May 13 00:42:59.671873 systemd[1]: Started sshd@17-10.0.0.58:22-10.0.0.1:38876.service. May 13 00:42:59.672262 systemd[1]: sshd@16-10.0.0.58:22-10.0.0.1:38864.service: Deactivated successfully. May 13 00:42:59.672740 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:42:59.673556 systemd-logind[1195]: Session 17 logged out. Waiting for processes to exit. May 13 00:42:59.674455 systemd-logind[1195]: Removed session 17. May 13 00:42:59.701576 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 38876 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:59.702741 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:59.705793 systemd-logind[1195]: New session 18 of user core. May 13 00:42:59.706636 systemd[1]: Started session-18.scope. May 13 00:42:59.916280 sshd[3463]: pam_unix(sshd:session): session closed for user core May 13 00:42:59.918747 systemd[1]: Started sshd@18-10.0.0.58:22-10.0.0.1:38886.service. May 13 00:42:59.919155 systemd[1]: sshd@17-10.0.0.58:22-10.0.0.1:38876.service: Deactivated successfully. May 13 00:42:59.919663 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:42:59.920335 systemd-logind[1195]: Session 18 logged out. Waiting for processes to exit. May 13 00:42:59.921285 systemd-logind[1195]: Removed session 18. May 13 00:42:59.948707 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 38886 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:42:59.949653 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:59.952662 systemd-logind[1195]: New session 19 of user core. May 13 00:42:59.953606 systemd[1]: Started session-19.scope. May 13 00:43:00.056876 sshd[3476]: pam_unix(sshd:session): session closed for user core May 13 00:43:00.058897 systemd[1]: sshd@18-10.0.0.58:22-10.0.0.1:38886.service: Deactivated successfully. May 13 00:43:00.059564 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:43:00.060082 systemd-logind[1195]: Session 19 logged out. Waiting for processes to exit. 
May 13 00:43:00.060764 systemd-logind[1195]: Removed session 19. May 13 00:43:05.061391 systemd[1]: Started sshd@19-10.0.0.58:22-10.0.0.1:57780.service. May 13 00:43:05.091727 sshd[3492]: Accepted publickey for core from 10.0.0.1 port 57780 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:05.092707 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:05.096039 systemd-logind[1195]: New session 20 of user core. May 13 00:43:05.096765 systemd[1]: Started session-20.scope. May 13 00:43:05.204097 sshd[3492]: pam_unix(sshd:session): session closed for user core May 13 00:43:05.206007 systemd[1]: sshd@19-10.0.0.58:22-10.0.0.1:57780.service: Deactivated successfully. May 13 00:43:05.206656 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:43:05.207189 systemd-logind[1195]: Session 20 logged out. Waiting for processes to exit. May 13 00:43:05.208005 systemd-logind[1195]: Removed session 20. May 13 00:43:10.208815 systemd[1]: Started sshd@20-10.0.0.58:22-10.0.0.1:57784.service. May 13 00:43:10.310043 sshd[3507]: Accepted publickey for core from 10.0.0.1 port 57784 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:10.311170 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:10.314524 systemd-logind[1195]: New session 21 of user core. May 13 00:43:10.315291 systemd[1]: Started session-21.scope. May 13 00:43:10.417533 sshd[3507]: pam_unix(sshd:session): session closed for user core May 13 00:43:10.419755 systemd[1]: sshd@20-10.0.0.58:22-10.0.0.1:57784.service: Deactivated successfully. May 13 00:43:10.420441 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:43:10.421060 systemd-logind[1195]: Session 21 logged out. Waiting for processes to exit. May 13 00:43:10.421645 systemd-logind[1195]: Removed session 21. May 13 00:43:15.420955 systemd[1]: Started sshd@21-10.0.0.58:22-10.0.0.1:56438.service. May 13 00:43:15.448867 sshd[3521]: Accepted publickey for core from 10.0.0.1 port 56438 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:15.449796 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:15.452590 systemd-logind[1195]: New session 22 of user core. May 13 00:43:15.453274 systemd[1]: Started session-22.scope. May 13 00:43:15.550986 sshd[3521]: pam_unix(sshd:session): session closed for user core May 13 00:43:15.552977 systemd[1]: sshd@21-10.0.0.58:22-10.0.0.1:56438.service: Deactivated successfully. May 13 00:43:15.553614 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:43:15.554124 systemd-logind[1195]: Session 22 logged out. Waiting for processes to exit. May 13 00:43:15.554822 systemd-logind[1195]: Removed session 22. May 13 00:43:20.554688 systemd[1]: Started sshd@22-10.0.0.58:22-10.0.0.1:56444.service. May 13 00:43:20.584450 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 56444 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:20.585375 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:20.588440 systemd-logind[1195]: New session 23 of user core. May 13 00:43:20.589181 systemd[1]: Started session-23.scope. May 13 00:43:20.702755 sshd[3534]: pam_unix(sshd:session): session closed for user core May 13 00:43:20.705456 systemd[1]: sshd@22-10.0.0.58:22-10.0.0.1:56444.service: Deactivated successfully. 
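The stretch above is a steady cadence of short-lived SSH sessions (13 through 23): publickey accept, pam_unix open, session-N.scope start, close after roughly a tenth of a second of activity, scope deactivation, logind removal. Below is a minimal Python sketch for pairing each open with its close and computing lifetimes from journal lines like these. It is hedged: it assumes the "Mon DD HH:MM:SS.ffffff" prefix exactly as rendered here (no year field, so it is only good for durations), and session_lifetimes is an invented helper name, not a tool present on this system.

    import re
    from datetime import datetime

    STAMP = "%b %d %H:%M:%S.%f"  # journal prefix as rendered in this log; no year
    NEW  = re.compile(r"(\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user")
    GONE = re.compile(r"(\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

    def session_lifetimes(log_text):
        """Yield (session id, seconds open) for each open/close pair in the text."""
        opened = {m.group(2): datetime.strptime(m.group(1), STAMP)
                  for m in NEW.finditer(log_text)}
        for m in GONE.finditer(log_text):
            if m.group(2) in opened:
                closed = datetime.strptime(m.group(1), STAMP)
                yield m.group(2), (closed - opened[m.group(2)]).total_seconds()

Run over this excerpt it would report, for example, session 13 open for roughly 0.12 s (00:42:47.92 to 00:42:48.04), a duration more consistent with scripted probes or health checks than with interactive logins.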
May 13 00:43:20.705993 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:43:20.706539 systemd-logind[1195]: Session 23 logged out. Waiting for processes to exit. May 13 00:43:20.707449 systemd[1]: Started sshd@23-10.0.0.58:22-10.0.0.1:56456.service. May 13 00:43:20.708438 systemd-logind[1195]: Removed session 23. May 13 00:43:20.735667 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 56456 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:20.736683 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:20.739989 systemd-logind[1195]: New session 24 of user core. May 13 00:43:20.740784 systemd[1]: Started session-24.scope. May 13 00:43:22.161180 env[1200]: time="2025-05-13T00:43:22.161110876Z" level=info msg="StopContainer for \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\" with timeout 30 (s)" May 13 00:43:22.161545 env[1200]: time="2025-05-13T00:43:22.161517839Z" level=info msg="Stop container \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\" with signal terminated" May 13 00:43:22.173170 systemd[1]: cri-containerd-a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64.scope: Deactivated successfully. May 13 00:43:22.180577 env[1200]: time="2025-05-13T00:43:22.180525449Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:43:22.185559 env[1200]: time="2025-05-13T00:43:22.185514108Z" level=info msg="StopContainer for \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\" with timeout 2 (s)" May 13 00:43:22.185783 env[1200]: time="2025-05-13T00:43:22.185692578Z" level=info msg="Stop container \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\" with signal terminated" May 13 00:43:22.191540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64-rootfs.mount: Deactivated successfully. 
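Two entries in this excerpt are causally linked: at 00:43:22.180 above, containerd reports a REMOVE event for /etc/cni/net.d/05-cilium.conf (the cilium CNI config vanishing as the agent is stopped), and at 00:43:22.462 further down the kubelet starts reporting "Container runtime network not ready ... cni plugin not initialized". A small hedged sketch that confirms the pairing in an excerpt like this one; the pattern copies journald's backslash-escaped quoting verbatim, and cni_outage is an invented name.

    import re

    # Literal pattern lifted from the escaped containerd entry in this excerpt.
    CNI_GONE = re.compile(r'fs change event\(\\"(/etc/cni/net\.d/[^"\\]+)\\": REMOVE\)')
    NET_DOWN = '"Container runtime network not ready"'

    def cni_outage(log_text):
        """Return the removed CNI config path and whether kubelet later went NetworkNotReady."""
        m = CNI_GONE.search(log_text)
        return (m.group(1) if m else None, NET_DOWN in log_text)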
May 13 00:43:22.193272 systemd-networkd[1025]: lxc_health: Link DOWN May 13 00:43:22.193277 systemd-networkd[1025]: lxc_health: Lost carrier May 13 00:43:22.201151 env[1200]: time="2025-05-13T00:43:22.201093403Z" level=info msg="shim disconnected" id=a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64 May 13 00:43:22.201380 env[1200]: time="2025-05-13T00:43:22.201157465Z" level=warning msg="cleaning up after shim disconnected" id=a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64 namespace=k8s.io May 13 00:43:22.201380 env[1200]: time="2025-05-13T00:43:22.201167424Z" level=info msg="cleaning up dead shim" May 13 00:43:22.207575 env[1200]: time="2025-05-13T00:43:22.207529853Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3601 runtime=io.containerd.runc.v2\n" May 13 00:43:22.209791 env[1200]: time="2025-05-13T00:43:22.209753630Z" level=info msg="StopContainer for \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\" returns successfully" May 13 00:43:22.210514 env[1200]: time="2025-05-13T00:43:22.210465502Z" level=info msg="StopPodSandbox for \"8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb\"" May 13 00:43:22.210639 env[1200]: time="2025-05-13T00:43:22.210544953Z" level=info msg="Container to stop \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:22.212665 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb-shm.mount: Deactivated successfully. May 13 00:43:22.218004 systemd[1]: cri-containerd-8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb.scope: Deactivated successfully. May 13 00:43:22.218881 systemd[1]: cri-containerd-738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b.scope: Deactivated successfully. May 13 00:43:22.219071 systemd[1]: cri-containerd-738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b.scope: Consumed 6.054s CPU time. May 13 00:43:22.236233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b-rootfs.mount: Deactivated successfully. May 13 00:43:22.240475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb-rootfs.mount: Deactivated successfully. 
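One detail worth pulling out of the teardown above: because CPU accounting is enabled for these units, systemd prints a final tally when it stops them, 6.054 s for the cilium-agent container scope here and 6.141 s for the whole pod slice a little further down. A quick hedged sketch to collect those figures; cpu_by_unit is a made-up helper, and the regex targets exactly the "Consumed N.NNNs CPU time" wording in this journal.

    import re

    CPU = re.compile(r"systemd\[1\]: ([\w.-]+\.(?:scope|slice)): Consumed ([\d.]+)s CPU time")

    def cpu_by_unit(log_text):
        """Map systemd unit name to the CPU seconds it had consumed when stopped."""
        return {m.group(1): float(m.group(2)) for m in CPU.finditer(log_text)}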
May 13 00:43:22.243417 env[1200]: time="2025-05-13T00:43:22.243367163Z" level=info msg="shim disconnected" id=738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b May 13 00:43:22.243417 env[1200]: time="2025-05-13T00:43:22.243410315Z" level=warning msg="cleaning up after shim disconnected" id=738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b namespace=k8s.io May 13 00:43:22.243417 env[1200]: time="2025-05-13T00:43:22.243418781Z" level=info msg="cleaning up dead shim" May 13 00:43:22.243656 env[1200]: time="2025-05-13T00:43:22.243617810Z" level=info msg="shim disconnected" id=8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb May 13 00:43:22.243656 env[1200]: time="2025-05-13T00:43:22.243655151Z" level=warning msg="cleaning up after shim disconnected" id=8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb namespace=k8s.io May 13 00:43:22.243870 env[1200]: time="2025-05-13T00:43:22.243665720Z" level=info msg="cleaning up dead shim" May 13 00:43:22.249702 env[1200]: time="2025-05-13T00:43:22.249658699Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3647 runtime=io.containerd.runc.v2\n" May 13 00:43:22.250059 env[1200]: time="2025-05-13T00:43:22.250033722Z" level=info msg="TearDown network for sandbox \"8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb\" successfully" May 13 00:43:22.250059 env[1200]: time="2025-05-13T00:43:22.250057928Z" level=info msg="StopPodSandbox for \"8c6919393c24f6a0271dc8ba2c4cd7fdbcf27a6c123ba622753df4adec588abb\" returns successfully" May 13 00:43:22.250473 env[1200]: time="2025-05-13T00:43:22.250436387Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3646 runtime=io.containerd.runc.v2\n" May 13 00:43:22.252914 env[1200]: time="2025-05-13T00:43:22.252882826Z" level=info msg="StopContainer for \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\" returns successfully" May 13 00:43:22.253235 env[1200]: time="2025-05-13T00:43:22.253208825Z" level=info msg="StopPodSandbox for \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\"" May 13 00:43:22.253297 env[1200]: time="2025-05-13T00:43:22.253275392Z" level=info msg="Container to stop \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:22.253337 env[1200]: time="2025-05-13T00:43:22.253297894Z" level=info msg="Container to stop \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:22.253337 env[1200]: time="2025-05-13T00:43:22.253312222Z" level=info msg="Container to stop \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:22.253337 env[1200]: time="2025-05-13T00:43:22.253329144Z" level=info msg="Container to stop \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:22.253461 env[1200]: time="2025-05-13T00:43:22.253342098Z" level=info msg="Container to stop \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:22.259027 systemd[1]: 
cri-containerd-e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a.scope: Deactivated successfully. May 13 00:43:22.307678 kubelet[1911]: I0513 00:43:22.307636 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bb07763-a021-4348-ace4-06a913442246-cilium-config-path\") pod \"1bb07763-a021-4348-ace4-06a913442246\" (UID: \"1bb07763-a021-4348-ace4-06a913442246\") " May 13 00:43:22.307678 kubelet[1911]: I0513 00:43:22.307686 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpq2m\" (UniqueName: \"kubernetes.io/projected/1bb07763-a021-4348-ace4-06a913442246-kube-api-access-xpq2m\") pod \"1bb07763-a021-4348-ace4-06a913442246\" (UID: \"1bb07763-a021-4348-ace4-06a913442246\") " May 13 00:43:22.309588 kubelet[1911]: I0513 00:43:22.309563 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bb07763-a021-4348-ace4-06a913442246-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1bb07763-a021-4348-ace4-06a913442246" (UID: "1bb07763-a021-4348-ace4-06a913442246"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:43:22.310279 kubelet[1911]: I0513 00:43:22.310249 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb07763-a021-4348-ace4-06a913442246-kube-api-access-xpq2m" (OuterVolumeSpecName: "kube-api-access-xpq2m") pod "1bb07763-a021-4348-ace4-06a913442246" (UID: "1bb07763-a021-4348-ace4-06a913442246"). InnerVolumeSpecName "kube-api-access-xpq2m". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:43:22.334383 env[1200]: time="2025-05-13T00:43:22.334307122Z" level=info msg="shim disconnected" id=e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a May 13 00:43:22.334383 env[1200]: time="2025-05-13T00:43:22.334367096Z" level=warning msg="cleaning up after shim disconnected" id=e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a namespace=k8s.io May 13 00:43:22.334383 env[1200]: time="2025-05-13T00:43:22.334378077Z" level=info msg="cleaning up dead shim" May 13 00:43:22.341512 env[1200]: time="2025-05-13T00:43:22.341449785Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3689 runtime=io.containerd.runc.v2\n" May 13 00:43:22.341836 env[1200]: time="2025-05-13T00:43:22.341796695Z" level=info msg="TearDown network for sandbox \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" successfully" May 13 00:43:22.341882 env[1200]: time="2025-05-13T00:43:22.341835477Z" level=info msg="StopPodSandbox for \"e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a\" returns successfully" May 13 00:43:22.408271 kubelet[1911]: I0513 00:43:22.408231 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-xtables-lock\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408271 kubelet[1911]: I0513 00:43:22.408270 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-cgroup\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: 
\"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408478 kubelet[1911]: I0513 00:43:22.408295 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35319098-239c-4ce5-987f-741bdd6be0b5-hubble-tls\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408478 kubelet[1911]: I0513 00:43:22.408307 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-run\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408478 kubelet[1911]: I0513 00:43:22.408322 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35319098-239c-4ce5-987f-741bdd6be0b5-clustermesh-secrets\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408478 kubelet[1911]: I0513 00:43:22.408336 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cni-path\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408478 kubelet[1911]: I0513 00:43:22.408347 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-hostproc\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408478 kubelet[1911]: I0513 00:43:22.408357 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-bpf-maps\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408614 kubelet[1911]: I0513 00:43:22.408369 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-lib-modules\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408614 kubelet[1911]: I0513 00:43:22.408380 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-etc-cni-netd\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408614 kubelet[1911]: I0513 00:43:22.408398 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-config-path\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408614 kubelet[1911]: I0513 00:43:22.408386 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.408614 kubelet[1911]: I0513 00:43:22.408440 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.408733 kubelet[1911]: I0513 00:43:22.408410 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-host-proc-sys-kernel\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408733 kubelet[1911]: I0513 00:43:22.408467 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.408733 kubelet[1911]: I0513 00:43:22.408501 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4grcb\" (UniqueName: \"kubernetes.io/projected/35319098-239c-4ce5-987f-741bdd6be0b5-kube-api-access-4grcb\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408733 kubelet[1911]: I0513 00:43:22.408520 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-host-proc-sys-net\") pod \"35319098-239c-4ce5-987f-741bdd6be0b5\" (UID: \"35319098-239c-4ce5-987f-741bdd6be0b5\") " May 13 00:43:22.408733 kubelet[1911]: I0513 00:43:22.408600 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bb07763-a021-4348-ace4-06a913442246-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.408733 kubelet[1911]: I0513 00:43:22.408609 1911 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.408911 kubelet[1911]: I0513 00:43:22.408617 1911 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xpq2m\" (UniqueName: \"kubernetes.io/projected/1bb07763-a021-4348-ace4-06a913442246-kube-api-access-xpq2m\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.408911 kubelet[1911]: I0513 00:43:22.408625 1911 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.408911 kubelet[1911]: I0513 00:43:22.408633 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.408911 kubelet[1911]: I0513 00:43:22.408653 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.408911 kubelet[1911]: I0513 00:43:22.408772 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.408911 kubelet[1911]: I0513 00:43:22.408794 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.409060 kubelet[1911]: I0513 00:43:22.408823 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cni-path" (OuterVolumeSpecName: "cni-path") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.409060 kubelet[1911]: I0513 00:43:22.408838 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-hostproc" (OuterVolumeSpecName: "hostproc") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.409060 kubelet[1911]: I0513 00:43:22.408857 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.409060 kubelet[1911]: I0513 00:43:22.408869 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:43:22.410543 kubelet[1911]: I0513 00:43:22.410525 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:43:22.411008 kubelet[1911]: I0513 00:43:22.410985 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35319098-239c-4ce5-987f-741bdd6be0b5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:43:22.411674 kubelet[1911]: I0513 00:43:22.411602 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35319098-239c-4ce5-987f-741bdd6be0b5-kube-api-access-4grcb" (OuterVolumeSpecName: "kube-api-access-4grcb") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "kube-api-access-4grcb". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:43:22.412432 kubelet[1911]: I0513 00:43:22.412395 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35319098-239c-4ce5-987f-741bdd6be0b5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "35319098-239c-4ce5-987f-741bdd6be0b5" (UID: "35319098-239c-4ce5-987f-741bdd6be0b5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:43:22.462443 kubelet[1911]: E0513 00:43:22.462400 1911 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:43:22.509170 kubelet[1911]: I0513 00:43:22.509106 1911 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4grcb\" (UniqueName: \"kubernetes.io/projected/35319098-239c-4ce5-987f-741bdd6be0b5-kube-api-access-4grcb\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509170 kubelet[1911]: I0513 00:43:22.509151 1911 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509170 kubelet[1911]: I0513 00:43:22.509160 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509170 kubelet[1911]: I0513 00:43:22.509168 1911 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35319098-239c-4ce5-987f-741bdd6be0b5-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509170 kubelet[1911]: I0513 00:43:22.509175 1911 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35319098-239c-4ce5-987f-741bdd6be0b5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509170 kubelet[1911]: I0513 00:43:22.509183 1911 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509458 kubelet[1911]: I0513 00:43:22.509192 1911 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509458 
kubelet[1911]: I0513 00:43:22.509199 1911 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509458 kubelet[1911]: I0513 00:43:22.509206 1911 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509458 kubelet[1911]: I0513 00:43:22.509213 1911 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35319098-239c-4ce5-987f-741bdd6be0b5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.509458 kubelet[1911]: I0513 00:43:22.509219 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35319098-239c-4ce5-987f-741bdd6be0b5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:22.579551 kubelet[1911]: I0513 00:43:22.579508 1911 scope.go:117] "RemoveContainer" containerID="a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64" May 13 00:43:22.580870 env[1200]: time="2025-05-13T00:43:22.580799138Z" level=info msg="RemoveContainer for \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\"" May 13 00:43:22.583414 systemd[1]: Removed slice kubepods-besteffort-pod1bb07763_a021_4348_ace4_06a913442246.slice. May 13 00:43:22.585721 systemd[1]: Removed slice kubepods-burstable-pod35319098_239c_4ce5_987f_741bdd6be0b5.slice. May 13 00:43:22.585787 systemd[1]: kubepods-burstable-pod35319098_239c_4ce5_987f_741bdd6be0b5.slice: Consumed 6.141s CPU time. May 13 00:43:22.838124 env[1200]: time="2025-05-13T00:43:22.838065076Z" level=info msg="RemoveContainer for \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\" returns successfully" May 13 00:43:22.838440 kubelet[1911]: I0513 00:43:22.838403 1911 scope.go:117] "RemoveContainer" containerID="a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64" May 13 00:43:22.838831 env[1200]: time="2025-05-13T00:43:22.838722095Z" level=error msg="ContainerStatus for \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\": not found" May 13 00:43:22.839009 kubelet[1911]: E0513 00:43:22.838952 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\": not found" containerID="a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64" May 13 00:43:22.839064 kubelet[1911]: I0513 00:43:22.838989 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64"} err="failed to get container status \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\": rpc error: code = NotFound desc = an error occurred when try to find container \"a1509ded16cc4d5fca6306fe44789a3aa8015980f2688dca02d3a2bf14c48b64\": not found" May 13 00:43:22.839124 kubelet[1911]: I0513 00:43:22.839066 1911 scope.go:117] "RemoveContainer" containerID="738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b" May 13 00:43:22.840136 
env[1200]: time="2025-05-13T00:43:22.840099011Z" level=info msg="RemoveContainer for \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\"" May 13 00:43:22.924570 env[1200]: time="2025-05-13T00:43:22.924513169Z" level=info msg="RemoveContainer for \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\" returns successfully" May 13 00:43:22.924827 kubelet[1911]: I0513 00:43:22.924776 1911 scope.go:117] "RemoveContainer" containerID="73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08" May 13 00:43:22.925812 env[1200]: time="2025-05-13T00:43:22.925762704Z" level=info msg="RemoveContainer for \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\"" May 13 00:43:23.022691 env[1200]: time="2025-05-13T00:43:23.022635879Z" level=info msg="RemoveContainer for \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\" returns successfully" May 13 00:43:23.022947 kubelet[1911]: I0513 00:43:23.022919 1911 scope.go:117] "RemoveContainer" containerID="761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba" May 13 00:43:23.024132 env[1200]: time="2025-05-13T00:43:23.024086897Z" level=info msg="RemoveContainer for \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\"" May 13 00:43:23.099343 env[1200]: time="2025-05-13T00:43:23.099236406Z" level=info msg="RemoveContainer for \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\" returns successfully" May 13 00:43:23.099777 kubelet[1911]: I0513 00:43:23.099613 1911 scope.go:117] "RemoveContainer" containerID="376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030" May 13 00:43:23.101789 env[1200]: time="2025-05-13T00:43:23.101735546Z" level=info msg="RemoveContainer for \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\"" May 13 00:43:23.114147 env[1200]: time="2025-05-13T00:43:23.114079240Z" level=info msg="RemoveContainer for \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\" returns successfully" May 13 00:43:23.114394 kubelet[1911]: I0513 00:43:23.114371 1911 scope.go:117] "RemoveContainer" containerID="5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba" May 13 00:43:23.115563 env[1200]: time="2025-05-13T00:43:23.115533465Z" level=info msg="RemoveContainer for \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\"" May 13 00:43:23.120663 env[1200]: time="2025-05-13T00:43:23.120619042Z" level=info msg="RemoveContainer for \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\" returns successfully" May 13 00:43:23.120834 kubelet[1911]: I0513 00:43:23.120796 1911 scope.go:117] "RemoveContainer" containerID="738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b" May 13 00:43:23.121124 env[1200]: time="2025-05-13T00:43:23.121048919Z" level=error msg="ContainerStatus for \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\": not found" May 13 00:43:23.121268 kubelet[1911]: E0513 00:43:23.121243 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\": not found" containerID="738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b" May 13 00:43:23.121322 kubelet[1911]: I0513 00:43:23.121277 1911 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b"} err="failed to get container status \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\": rpc error: code = NotFound desc = an error occurred when try to find container \"738cee90ea18907a9776f5ade78843c993b24476bffbbe54732bcc9360b4e28b\": not found" May 13 00:43:23.121322 kubelet[1911]: I0513 00:43:23.121306 1911 scope.go:117] "RemoveContainer" containerID="73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08" May 13 00:43:23.121517 env[1200]: time="2025-05-13T00:43:23.121464388Z" level=error msg="ContainerStatus for \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\": not found" May 13 00:43:23.121606 kubelet[1911]: E0513 00:43:23.121586 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\": not found" containerID="73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08" May 13 00:43:23.121656 kubelet[1911]: I0513 00:43:23.121607 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08"} err="failed to get container status \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\": rpc error: code = NotFound desc = an error occurred when try to find container \"73813d6949d23c9946c4ebdfb7d93afef02a438400ba90f1f879dc33c810de08\": not found" May 13 00:43:23.121656 kubelet[1911]: I0513 00:43:23.121619 1911 scope.go:117] "RemoveContainer" containerID="761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba" May 13 00:43:23.121816 env[1200]: time="2025-05-13T00:43:23.121760992Z" level=error msg="ContainerStatus for \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\": not found" May 13 00:43:23.121918 kubelet[1911]: E0513 00:43:23.121901 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\": not found" containerID="761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba" May 13 00:43:23.121969 kubelet[1911]: I0513 00:43:23.121916 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba"} err="failed to get container status \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"761f4d8259f2c11b3c4b45175b1870bb411efbe6c0db863d11fc9c6ff2abd0ba\": not found" May 13 00:43:23.121969 kubelet[1911]: I0513 00:43:23.121927 1911 scope.go:117] "RemoveContainer" containerID="376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030" May 13 00:43:23.122095 env[1200]: time="2025-05-13T00:43:23.122056834Z" level=error msg="ContainerStatus for \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\": not found" May 13 00:43:23.122191 kubelet[1911]: E0513 00:43:23.122176 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\": not found" containerID="376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030" May 13 00:43:23.122234 kubelet[1911]: I0513 00:43:23.122193 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030"} err="failed to get container status \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\": rpc error: code = NotFound desc = an error occurred when try to find container \"376edfb270ca48309633d894636c0ef85cc509a4116b048063b0ba0737daa030\": not found" May 13 00:43:23.122234 kubelet[1911]: I0513 00:43:23.122205 1911 scope.go:117] "RemoveContainer" containerID="5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba" May 13 00:43:23.122355 env[1200]: time="2025-05-13T00:43:23.122317881Z" level=error msg="ContainerStatus for \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\": not found" May 13 00:43:23.122493 kubelet[1911]: E0513 00:43:23.122447 1911 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\": not found" containerID="5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba" May 13 00:43:23.122493 kubelet[1911]: I0513 00:43:23.122487 1911 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba"} err="failed to get container status \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fe91f90bb1599dc8daa6b09dc6c140922ca82f0a20749e8033d548f02aaa5ba\": not found" May 13 00:43:23.166762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a-rootfs.mount: Deactivated successfully. May 13 00:43:23.166877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0f6e06bbf395b9208476db66fc5538b31d78a2a25132ef88107889621bd005a-shm.mount: Deactivated successfully. May 13 00:43:23.166935 systemd[1]: var-lib-kubelet-pods-1bb07763\x2da021\x2d4348\x2dace4\x2d06a913442246-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxpq2m.mount: Deactivated successfully. May 13 00:43:23.166992 systemd[1]: var-lib-kubelet-pods-35319098\x2d239c\x2d4ce5\x2d987f\x2d741bdd6be0b5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4grcb.mount: Deactivated successfully. May 13 00:43:23.167049 systemd[1]: var-lib-kubelet-pods-35319098\x2d239c\x2d4ce5\x2d987f\x2d741bdd6be0b5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 00:43:23.167101 systemd[1]: var-lib-kubelet-pods-35319098\x2d239c\x2d4ce5\x2d987f\x2d741bdd6be0b5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:43:23.402962 kubelet[1911]: I0513 00:43:23.402828 1911 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb07763-a021-4348-ace4-06a913442246" path="/var/lib/kubelet/pods/1bb07763-a021-4348-ace4-06a913442246/volumes" May 13 00:43:23.403261 kubelet[1911]: I0513 00:43:23.403205 1911 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35319098-239c-4ce5-987f-741bdd6be0b5" path="/var/lib/kubelet/pods/35319098-239c-4ce5-987f-741bdd6be0b5/volumes" May 13 00:43:24.146915 systemd[1]: Started sshd@24-10.0.0.58:22-10.0.0.1:38086.service. May 13 00:43:24.158670 sshd[3547]: pam_unix(sshd:session): session closed for user core May 13 00:43:24.161040 systemd[1]: sshd@23-10.0.0.58:22-10.0.0.1:56456.service: Deactivated successfully. May 13 00:43:24.161852 systemd[1]: session-24.scope: Deactivated successfully. May 13 00:43:24.163022 systemd-logind[1195]: Session 24 logged out. Waiting for processes to exit. May 13 00:43:24.163818 systemd-logind[1195]: Removed session 24. May 13 00:43:24.183794 sshd[3705]: Accepted publickey for core from 10.0.0.1 port 38086 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:24.184949 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:24.188091 systemd-logind[1195]: New session 25 of user core. May 13 00:43:24.188857 systemd[1]: Started session-25.scope. May 13 00:43:24.688020 sshd[3705]: pam_unix(sshd:session): session closed for user core May 13 00:43:24.690369 systemd[1]: Started sshd@25-10.0.0.58:22-10.0.0.1:38090.service. May 13 00:43:24.691559 systemd-logind[1195]: Session 25 logged out. Waiting for processes to exit. May 13 00:43:24.694713 systemd[1]: sshd@24-10.0.0.58:22-10.0.0.1:38086.service: Deactivated successfully. May 13 00:43:24.697634 systemd[1]: session-25.scope: Deactivated successfully. May 13 00:43:24.699664 systemd-logind[1195]: Removed session 25. May 13 00:43:24.712293 kubelet[1911]: I0513 00:43:24.712243 1911 memory_manager.go:355] "RemoveStaleState removing state" podUID="1bb07763-a021-4348-ace4-06a913442246" containerName="cilium-operator" May 13 00:43:24.712293 kubelet[1911]: I0513 00:43:24.712279 1911 memory_manager.go:355] "RemoveStaleState removing state" podUID="35319098-239c-4ce5-987f-741bdd6be0b5" containerName="cilium-agent" May 13 00:43:24.720000 systemd[1]: Created slice kubepods-burstable-pod9fc3915f_afdf_417a_8b2e_4d0ecc31bf2e.slice. May 13 00:43:24.724594 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 38090 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:24.726258 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:24.732780 systemd-logind[1195]: New session 26 of user core. May 13 00:43:24.733331 systemd[1]: Started session-26.scope. 
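The memory_manager entries just above are the first place the excerpt names what was deleted: podUID 1bb07763-a021-4348-ace4-06a913442246 was running cilium-operator and 35319098-239c-4ce5-987f-741bdd6be0b5 was running cilium-agent. A one-liner-grade sketch for recovering that mapping from a journal; deleted_workloads is an invented helper name.

    import re

    STALE = re.compile(r'RemoveStaleState removing state" podUID="([0-9a-f-]+)" containerName="([\w-]+)"')

    def deleted_workloads(log_text):
        """Map pod UID to container name for every stale-state removal in the text."""
        return dict(STALE.findall(log_text))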
May 13 00:43:24.822677 kubelet[1911]: I0513 00:43:24.822635 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzqg9\" (UniqueName: \"kubernetes.io/projected/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-kube-api-access-kzqg9\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.822677 kubelet[1911]: I0513 00:43:24.822687 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-bpf-maps\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.822915 kubelet[1911]: I0513 00:43:24.822714 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-clustermesh-secrets\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.822915 kubelet[1911]: I0513 00:43:24.822736 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-config-path\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.822915 kubelet[1911]: I0513 00:43:24.822752 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-host-proc-sys-net\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.822915 kubelet[1911]: I0513 00:43:24.822774 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-hubble-tls\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.822915 kubelet[1911]: I0513 00:43:24.822792 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cni-path\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.822915 kubelet[1911]: I0513 00:43:24.822864 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-etc-cni-netd\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.823047 kubelet[1911]: I0513 00:43:24.822890 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-host-proc-sys-kernel\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.823047 kubelet[1911]: I0513 00:43:24.822916 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-hostproc\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.823047 kubelet[1911]: I0513 00:43:24.822934 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-cgroup\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.823047 kubelet[1911]: I0513 00:43:24.822951 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-lib-modules\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.823047 kubelet[1911]: I0513 00:43:24.822966 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-xtables-lock\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.823047 kubelet[1911]: I0513 00:43:24.822994 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-run\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.823189 kubelet[1911]: I0513 00:43:24.823011 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-ipsec-secrets\") pod \"cilium-j4cfz\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") " pod="kube-system/cilium-j4cfz" May 13 00:43:24.961528 sshd[3717]: pam_unix(sshd:session): session closed for user core May 13 00:43:24.963788 systemd[1]: Started sshd@26-10.0.0.58:22-10.0.0.1:38096.service. May 13 00:43:24.966132 systemd[1]: sshd@25-10.0.0.58:22-10.0.0.1:38090.service: Deactivated successfully. May 13 00:43:24.966621 systemd[1]: session-26.scope: Deactivated successfully. May 13 00:43:24.967385 systemd-logind[1195]: Session 26 logged out. Waiting for processes to exit. May 13 00:43:24.968043 systemd-logind[1195]: Removed session 26. May 13 00:43:24.991590 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 38096 ssh2: RSA SHA256:DdQ03puPlrcMVvAygFcBmS1VmnEhwtAiKRhWokZsFN8 May 13 00:43:24.992845 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:43:24.996084 systemd-logind[1195]: New session 27 of user core. May 13 00:43:24.996783 systemd[1]: Started session-27.scope. 
May 13 00:43:25.055864 kubelet[1911]: E0513 00:43:25.047646 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:25.056041 env[1200]: time="2025-05-13T00:43:25.048214093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j4cfz,Uid:9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e,Namespace:kube-system,Attempt:0,}" May 13 00:43:25.067122 env[1200]: time="2025-05-13T00:43:25.067012999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:43:25.067122 env[1200]: time="2025-05-13T00:43:25.067091318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:43:25.067326 env[1200]: time="2025-05-13T00:43:25.067127417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:43:25.067359 env[1200]: time="2025-05-13T00:43:25.067293322Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935 pid=3749 runtime=io.containerd.runc.v2 May 13 00:43:25.087590 systemd[1]: Started cri-containerd-2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935.scope. May 13 00:43:25.121184 env[1200]: time="2025-05-13T00:43:25.121133658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j4cfz,Uid:9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935\"" May 13 00:43:25.121778 kubelet[1911]: E0513 00:43:25.121746 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:25.124380 env[1200]: time="2025-05-13T00:43:25.124348042Z" level=info msg="CreateContainer within sandbox \"2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:43:25.146260 env[1200]: time="2025-05-13T00:43:25.146192261Z" level=info msg="CreateContainer within sandbox \"2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846\"" May 13 00:43:25.147393 env[1200]: time="2025-05-13T00:43:25.147329183Z" level=info msg="StartContainer for \"65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846\"" May 13 00:43:25.163874 systemd[1]: Started cri-containerd-65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846.scope. May 13 00:43:25.173173 systemd[1]: cri-containerd-65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846.scope: Deactivated successfully. May 13 00:43:25.173420 systemd[1]: Stopped cri-containerd-65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846.scope. 
May 13 00:43:25.191475 env[1200]: time="2025-05-13T00:43:25.191412031Z" level=info msg="shim disconnected" id=65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846
May 13 00:43:25.191475 env[1200]: time="2025-05-13T00:43:25.191466785Z" level=warning msg="cleaning up after shim disconnected" id=65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846 namespace=k8s.io
May 13 00:43:25.191475 env[1200]: time="2025-05-13T00:43:25.191475752Z" level=info msg="cleaning up dead shim"
May 13 00:43:25.198336 env[1200]: time="2025-05-13T00:43:25.198288620Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3806 runtime=io.containerd.runc.v2\ntime=\"2025-05-13T00:43:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 13 00:43:25.198844 env[1200]: time="2025-05-13T00:43:25.198706856Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed"
May 13 00:43:25.199029 env[1200]: time="2025-05-13T00:43:25.198937946Z" level=error msg="Failed to pipe stdout of container \"65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846\"" error="reading from a closed fifo"
May 13 00:43:25.199085 env[1200]: time="2025-05-13T00:43:25.198980086Z" level=error msg="Failed to pipe stderr of container \"65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846\"" error="reading from a closed fifo"
May 13 00:43:25.204702 env[1200]: time="2025-05-13T00:43:25.204624682Z" level=error msg="StartContainer for \"65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
May 13 00:43:25.205018 kubelet[1911]: E0513 00:43:25.204956 1911 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846"
May 13 00:43:25.205197 kubelet[1911]: E0513 00:43:25.205174 1911 kuberuntime_manager.go:1341] "Unhandled Error" err=<
May 13 00:43:25.205197 kubelet[1911]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
May 13 00:43:25.205197 kubelet[1911]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
May 13 00:43:25.205197 kubelet[1911]: rm /hostbin/cilium-mount
May 13 00:43:25.205321 kubelet[1911]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzqg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-j4cfz_kube-system(9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
May 13 00:43:25.205321 kubelet[1911]: > logger="UnhandledError"
May 13 00:43:25.206399 kubelet[1911]: E0513 00:43:25.206345 1911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-j4cfz" podUID="9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"
May 13 00:43:25.590987 env[1200]: time="2025-05-13T00:43:25.590952203Z" level=info msg="StopPodSandbox for \"2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935\""
May 13 00:43:25.591247 env[1200]: time="2025-05-13T00:43:25.591203131Z" level=info msg="Container to stop \"65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:43:25.596018 systemd[1]: cri-containerd-2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935.scope: Deactivated successfully.
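The failed mount-cgroup init container above is worth unpacking. Its logged Command and Env fields amount to the small host-setup script below (a readable reconstruction of exactly what the log records, with only comments added). Note that the RunContainerError happens before this script ever runs: runc fails writing /proc/self/attr/keycreate, the process's SELinux key-creation label, which is consistent with the spec's SELinuxOptions (Type:spc_t, Level:s0) being applied on a kernel that rejects that attribute write with EINVAL ("invalid argument"). That diagnosis is inferred from the error text, not stated in the log.

    #!/bin/sh -e
    # Reconstruction of the mount-cgroup init container command logged above.
    # BIN_PATH and CGROUP_ROOT are the values from the container's EnvVar list.
    BIN_PATH=/opt/cni/bin
    CGROUP_ROOT=/run/cilium/cgroupv2

    # Stage the helper binary on the host via the cni-path volume mounted at /hostbin.
    cp /usr/bin/cilium-mount /hostbin/cilium-mount
    # Enter PID 1's cgroup and mount namespaces (hostproc volume at /hostproc)
    # and mount the cgroup2 filesystem at CGROUP_ROOT on the host.
    nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT
    # Remove the staged binary again.
    rm /hostbin/cilium-mount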
May 13 00:43:25.615787 env[1200]: time="2025-05-13T00:43:25.615689395Z" level=info msg="shim disconnected" id=2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935
May 13 00:43:25.615946 env[1200]: time="2025-05-13T00:43:25.615790867Z" level=warning msg="cleaning up after shim disconnected" id=2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935 namespace=k8s.io
May 13 00:43:25.615946 env[1200]: time="2025-05-13T00:43:25.615819953Z" level=info msg="cleaning up dead shim"
May 13 00:43:25.622153 env[1200]: time="2025-05-13T00:43:25.622110348Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3836 runtime=io.containerd.runc.v2\n"
May 13 00:43:25.622447 env[1200]: time="2025-05-13T00:43:25.622390760Z" level=info msg="TearDown network for sandbox \"2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935\" successfully"
May 13 00:43:25.622447 env[1200]: time="2025-05-13T00:43:25.622420577Z" level=info msg="StopPodSandbox for \"2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935\" returns successfully"
May 13 00:43:25.726978 kubelet[1911]: I0513 00:43:25.726914 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzqg9\" (UniqueName: \"kubernetes.io/projected/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-kube-api-access-kzqg9\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.726978 kubelet[1911]: I0513 00:43:25.726962 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cni-path\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.726992 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-host-proc-sys-net\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727015 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-etc-cni-netd\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727033 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-host-proc-sys-kernel\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727047 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-hostproc\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727061 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-lib-modules\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727072 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-run\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727089 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-xtables-lock\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727072 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727118 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-hubble-tls\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727187 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-cgroup\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727207 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-bpf-maps\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727229 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-clustermesh-secrets\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727250 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-config-path\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727264 1911 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-ipsec-secrets\") pod \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\" (UID: \"9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e\") "
May 13 00:43:25.727364 kubelet[1911]: I0513 00:43:25.727316 1911 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.727761 kubelet[1911]: I0513 00:43:25.727636 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cni-path" (OuterVolumeSpecName: "cni-path") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.727761 kubelet[1911]: I0513 00:43:25.727692 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.727761 kubelet[1911]: I0513 00:43:25.727713 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.727874 kubelet[1911]: I0513 00:43:25.727829 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.727874 kubelet[1911]: I0513 00:43:25.727866 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.727924 kubelet[1911]: I0513 00:43:25.727884 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-hostproc" (OuterVolumeSpecName: "hostproc") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.727924 kubelet[1911]: I0513 00:43:25.727902 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.728055 kubelet[1911]: I0513 00:43:25.728028 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.728113 kubelet[1911]: I0513 00:43:25.728057 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 13 00:43:25.729696 kubelet[1911]: I0513 00:43:25.729667 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 13 00:43:25.729762 kubelet[1911]: I0513 00:43:25.729753 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-kube-api-access-kzqg9" (OuterVolumeSpecName: "kube-api-access-kzqg9") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "kube-api-access-kzqg9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 00:43:25.730319 kubelet[1911]: I0513 00:43:25.730301 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 13 00:43:25.730631 kubelet[1911]: I0513 00:43:25.730608 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 13 00:43:25.731723 kubelet[1911]: I0513 00:43:25.731696 1911 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" (UID: "9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 13 00:43:25.828018 kubelet[1911]: I0513 00:43:25.827983 1911 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828018 kubelet[1911]: I0513 00:43:25.828004 1911 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828018 kubelet[1911]: I0513 00:43:25.828011 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828018 kubelet[1911]: I0513 00:43:25.828018 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828027 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828034 1911 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828041 1911 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828049 1911 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828055 1911 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kzqg9\" (UniqueName: \"kubernetes.io/projected/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-kube-api-access-kzqg9\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828062 1911 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828069 1911 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828076 1911 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828083 1911 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.828266 kubelet[1911]: I0513 00:43:25.828089 1911 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e-cilium-run\") on node \"localhost\" DevicePath \"\""
May 13 00:43:25.927988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935-rootfs.mount: Deactivated successfully.
May 13 00:43:25.928086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2147ec0153cf3071aa6db0470298562b1f2ad84107d8f5ed9a380a80bf885935-shm.mount: Deactivated successfully.
May 13 00:43:25.928164 systemd[1]: var-lib-kubelet-pods-9fc3915f\x2dafdf\x2d417a\x2d8b2e\x2d4d0ecc31bf2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkzqg9.mount: Deactivated successfully.
May 13 00:43:25.928230 systemd[1]: var-lib-kubelet-pods-9fc3915f\x2dafdf\x2d417a\x2d8b2e\x2d4d0ecc31bf2e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 00:43:25.928301 systemd[1]: var-lib-kubelet-pods-9fc3915f\x2dafdf\x2d417a\x2d8b2e\x2d4d0ecc31bf2e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 00:43:25.928366 systemd[1]: var-lib-kubelet-pods-9fc3915f\x2dafdf\x2d417a\x2d8b2e\x2d4d0ecc31bf2e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 13 00:43:26.593873 kubelet[1911]: I0513 00:43:26.593831 1911 scope.go:117] "RemoveContainer" containerID="65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846"
May 13 00:43:26.594525 env[1200]: time="2025-05-13T00:43:26.594485068Z" level=info msg="RemoveContainer for \"65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846\""
May 13 00:43:26.597608 systemd[1]: Removed slice kubepods-burstable-pod9fc3915f_afdf_417a_8b2e_4d0ecc31bf2e.slice.
May 13 00:43:26.739192 env[1200]: time="2025-05-13T00:43:26.739127766Z" level=info msg="RemoveContainer for \"65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846\" returns successfully"
May 13 00:43:26.755721 kubelet[1911]: I0513 00:43:26.755684 1911 memory_manager.go:355] "RemoveStaleState removing state" podUID="9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" containerName="mount-cgroup"
May 13 00:43:26.761219 systemd[1]: Created slice kubepods-burstable-podd041e233_089c_45b5_9fdb_d5ea88481845.slice.
May 13 00:43:26.833016 kubelet[1911]: I0513 00:43:26.832964 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-host-proc-sys-kernel\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833016 kubelet[1911]: I0513 00:43:26.833006 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-cni-path\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833193 kubelet[1911]: I0513 00:43:26.833033 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-hostproc\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833193 kubelet[1911]: I0513 00:43:26.833053 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-host-proc-sys-net\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833193 kubelet[1911]: I0513 00:43:26.833071 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lh9j\" (UniqueName: \"kubernetes.io/projected/d041e233-089c-45b5-9fdb-d5ea88481845-kube-api-access-9lh9j\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833193 kubelet[1911]: I0513 00:43:26.833100 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d041e233-089c-45b5-9fdb-d5ea88481845-cilium-config-path\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833193 kubelet[1911]: I0513 00:43:26.833123 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d041e233-089c-45b5-9fdb-d5ea88481845-cilium-ipsec-secrets\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833193 kubelet[1911]: I0513 00:43:26.833141 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-bpf-maps\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833193 kubelet[1911]: I0513 00:43:26.833159 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-lib-modules\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833193 kubelet[1911]: I0513 00:43:26.833189 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-xtables-lock\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833398 kubelet[1911]: I0513 00:43:26.833218 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-etc-cni-netd\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833398 kubelet[1911]: I0513 00:43:26.833236 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-cilium-run\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833398 kubelet[1911]: I0513 00:43:26.833261 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d041e233-089c-45b5-9fdb-d5ea88481845-cilium-cgroup\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833398 kubelet[1911]: I0513 00:43:26.833281 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d041e233-089c-45b5-9fdb-d5ea88481845-hubble-tls\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:26.833398 kubelet[1911]: I0513 00:43:26.833300 1911 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d041e233-089c-45b5-9fdb-d5ea88481845-clustermesh-secrets\") pod \"cilium-qjml5\" (UID: \"d041e233-089c-45b5-9fdb-d5ea88481845\") " pod="kube-system/cilium-qjml5"
May 13 00:43:27.063479 kubelet[1911]: E0513 00:43:27.063353 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:27.064192 env[1200]: time="2025-05-13T00:43:27.064019409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qjml5,Uid:d041e233-089c-45b5-9fdb-d5ea88481845,Namespace:kube-system,Attempt:0,}"
May 13 00:43:27.075758 env[1200]: time="2025-05-13T00:43:27.075702263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:43:27.075758 env[1200]: time="2025-05-13T00:43:27.075731960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:43:27.075758 env[1200]: time="2025-05-13T00:43:27.075741248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:43:27.075983 env[1200]: time="2025-05-13T00:43:27.075887206Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe pid=3865 runtime=io.containerd.runc.v2
May 13 00:43:27.085029 systemd[1]: Started cri-containerd-9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe.scope.
May 13 00:43:27.103621 env[1200]: time="2025-05-13T00:43:27.103560091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qjml5,Uid:d041e233-089c-45b5-9fdb-d5ea88481845,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\""
May 13 00:43:27.104299 kubelet[1911]: E0513 00:43:27.104266 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:27.106224 env[1200]: time="2025-05-13T00:43:27.106193303Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:43:27.402831 kubelet[1911]: I0513 00:43:27.402770 1911 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e" path="/var/lib/kubelet/pods/9fc3915f-afdf-417a-8b2e-4d0ecc31bf2e/volumes"
May 13 00:43:27.463873 kubelet[1911]: E0513 00:43:27.463829 1911 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:43:27.541689 env[1200]: time="2025-05-13T00:43:27.541618252Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0df193ba1f508a5ee17a4f9bb647121e9a6b33d6a896ebef9676ce727e471f9a\""
May 13 00:43:27.542178 env[1200]: time="2025-05-13T00:43:27.542151105Z" level=info msg="StartContainer for \"0df193ba1f508a5ee17a4f9bb647121e9a6b33d6a896ebef9676ce727e471f9a\""
May 13 00:43:27.554654 systemd[1]: Started cri-containerd-0df193ba1f508a5ee17a4f9bb647121e9a6b33d6a896ebef9676ce727e471f9a.scope.
May 13 00:43:27.593927 env[1200]: time="2025-05-13T00:43:27.593837622Z" level=info msg="StartContainer for \"0df193ba1f508a5ee17a4f9bb647121e9a6b33d6a896ebef9676ce727e471f9a\" returns successfully"
May 13 00:43:27.597269 systemd[1]: cri-containerd-0df193ba1f508a5ee17a4f9bb647121e9a6b33d6a896ebef9676ce727e471f9a.scope: Deactivated successfully.
May 13 00:43:27.599560 kubelet[1911]: E0513 00:43:27.599525 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:27.645387 env[1200]: time="2025-05-13T00:43:27.645320823Z" level=info msg="shim disconnected" id=0df193ba1f508a5ee17a4f9bb647121e9a6b33d6a896ebef9676ce727e471f9a
May 13 00:43:27.645387 env[1200]: time="2025-05-13T00:43:27.645371159Z" level=warning msg="cleaning up after shim disconnected" id=0df193ba1f508a5ee17a4f9bb647121e9a6b33d6a896ebef9676ce727e471f9a namespace=k8s.io
May 13 00:43:27.645387 env[1200]: time="2025-05-13T00:43:27.645380296Z" level=info msg="cleaning up dead shim"
May 13 00:43:27.652302 env[1200]: time="2025-05-13T00:43:27.652253757Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\n"
May 13 00:43:28.296488 kubelet[1911]: W0513 00:43:28.296438 1911 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fc3915f_afdf_417a_8b2e_4d0ecc31bf2e.slice/cri-containerd-65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846.scope WatchSource:0}: container "65fad2df9dd175ea373a94c8d5ccefdf1a5a3f1cb8eae28fa7c3f56766181846" in namespace "k8s.io": not found
May 13 00:43:28.602573 kubelet[1911]: E0513 00:43:28.602544 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:28.604062 env[1200]: time="2025-05-13T00:43:28.604017603Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:43:28.672586 env[1200]: time="2025-05-13T00:43:28.672504720Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a\""
May 13 00:43:28.673252 env[1200]: time="2025-05-13T00:43:28.673140040Z" level=info msg="StartContainer for \"70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a\""
May 13 00:43:28.691937 systemd[1]: Started cri-containerd-70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a.scope.
May 13 00:43:28.714902 env[1200]: time="2025-05-13T00:43:28.714851024Z" level=info msg="StartContainer for \"70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a\" returns successfully"
May 13 00:43:28.718055 systemd[1]: cri-containerd-70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a.scope: Deactivated successfully.
May 13 00:43:28.756727 env[1200]: time="2025-05-13T00:43:28.756665235Z" level=info msg="shim disconnected" id=70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a
May 13 00:43:28.756727 env[1200]: time="2025-05-13T00:43:28.756725439Z" level=warning msg="cleaning up after shim disconnected" id=70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a namespace=k8s.io
May 13 00:43:28.756727 env[1200]: time="2025-05-13T00:43:28.756736761Z" level=info msg="cleaning up dead shim"
May 13 00:43:28.763534 env[1200]: time="2025-05-13T00:43:28.763489777Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n"
May 13 00:43:28.938293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a-rootfs.mount: Deactivated successfully.
May 13 00:43:29.007316 kubelet[1911]: I0513 00:43:29.007259 1911 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:43:29Z","lastTransitionTime":"2025-05-13T00:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 00:43:29.605782 kubelet[1911]: E0513 00:43:29.605746 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:29.607368 env[1200]: time="2025-05-13T00:43:29.607316558Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:43:29.639610 env[1200]: time="2025-05-13T00:43:29.639547452Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8\""
May 13 00:43:29.640284 env[1200]: time="2025-05-13T00:43:29.640240692Z" level=info msg="StartContainer for \"cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8\""
May 13 00:43:29.660031 systemd[1]: Started cri-containerd-cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8.scope.
May 13 00:43:29.684502 env[1200]: time="2025-05-13T00:43:29.684447029Z" level=info msg="StartContainer for \"cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8\" returns successfully"
May 13 00:43:29.686392 systemd[1]: cri-containerd-cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8.scope: Deactivated successfully.
May 13 00:43:29.841760 env[1200]: time="2025-05-13T00:43:29.841699447Z" level=info msg="shim disconnected" id=cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8
May 13 00:43:29.841760 env[1200]: time="2025-05-13T00:43:29.841748651Z" level=warning msg="cleaning up after shim disconnected" id=cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8 namespace=k8s.io
May 13 00:43:29.841760 env[1200]: time="2025-05-13T00:43:29.841757448Z" level=info msg="cleaning up dead shim"
May 13 00:43:29.847555 env[1200]: time="2025-05-13T00:43:29.847534919Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4064 runtime=io.containerd.runc.v2\n"
May 13 00:43:29.948045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8-rootfs.mount: Deactivated successfully.
May 13 00:43:30.609496 kubelet[1911]: E0513 00:43:30.609462 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:30.611433 env[1200]: time="2025-05-13T00:43:30.611395704Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:43:30.681443 env[1200]: time="2025-05-13T00:43:30.681385442Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3\""
May 13 00:43:30.681951 env[1200]: time="2025-05-13T00:43:30.681907547Z" level=info msg="StartContainer for \"fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3\""
May 13 00:43:30.695923 systemd[1]: Started cri-containerd-fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3.scope.
May 13 00:43:30.713289 systemd[1]: cri-containerd-fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3.scope: Deactivated successfully.
May 13 00:43:30.714755 env[1200]: time="2025-05-13T00:43:30.714714223Z" level=info msg="StartContainer for \"fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3\" returns successfully"
May 13 00:43:30.733316 env[1200]: time="2025-05-13T00:43:30.733261600Z" level=info msg="shim disconnected" id=fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3
May 13 00:43:30.733316 env[1200]: time="2025-05-13T00:43:30.733307797Z" level=warning msg="cleaning up after shim disconnected" id=fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3 namespace=k8s.io
May 13 00:43:30.733316 env[1200]: time="2025-05-13T00:43:30.733317346Z" level=info msg="cleaning up dead shim"
May 13 00:43:30.739487 env[1200]: time="2025-05-13T00:43:30.739437593Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4118 runtime=io.containerd.runc.v2\n"
May 13 00:43:30.948186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3-rootfs.mount: Deactivated successfully.
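The replacement pod cilium-qjml5 is now working through Cilium's init chain: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state each start, run to completion, and have their shim reaped in turn. A hedged sketch of how this sequence could be inspected from the node with crictl (the pod and container IDs are the ones logged above; assumes crictl is configured against the containerd CRI socket):

    # Locate the cilium-qjml5 pod sandbox (9c0337f1dd3e... in the log above).
    crictl pods --name cilium-qjml5
    # List every container in that sandbox, including the exited init containers.
    crictl ps -a --pod 9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe
    # Inspect the exit status of a single init container, e.g. mount-bpf-fs.
    crictl inspect cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8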
May 13 00:43:31.404065 kubelet[1911]: W0513 00:43:31.404013 1911 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd041e233_089c_45b5_9fdb_d5ea88481845.slice/cri-containerd-0df193ba1f508a5ee17a4f9bb647121e9a6b33d6a896ebef9676ce727e471f9a.scope WatchSource:0}: task 0df193ba1f508a5ee17a4f9bb647121e9a6b33d6a896ebef9676ce727e471f9a not found: not found
May 13 00:43:31.613471 kubelet[1911]: E0513 00:43:31.613437 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:31.615130 env[1200]: time="2025-05-13T00:43:31.615096618Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:43:31.780275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560347692.mount: Deactivated successfully.
May 13 00:43:31.826634 env[1200]: time="2025-05-13T00:43:31.826588875Z" level=info msg="CreateContainer within sandbox \"9c0337f1dd3e85ad43dff884b82fe1031a5cfb4fa16059d1124de7c8f8cb0ebe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f06eda5a04b5d9e8f86963aa7b6fc511df6193785c7887045288ddf55f1b7a51\""
May 13 00:43:31.827272 env[1200]: time="2025-05-13T00:43:31.827235677Z" level=info msg="StartContainer for \"f06eda5a04b5d9e8f86963aa7b6fc511df6193785c7887045288ddf55f1b7a51\""
May 13 00:43:31.840354 systemd[1]: Started cri-containerd-f06eda5a04b5d9e8f86963aa7b6fc511df6193785c7887045288ddf55f1b7a51.scope.
May 13 00:43:31.890873 env[1200]: time="2025-05-13T00:43:31.890825035Z" level=info msg="StartContainer for \"f06eda5a04b5d9e8f86963aa7b6fc511df6193785c7887045288ddf55f1b7a51\" returns successfully"
May 13 00:43:32.134834 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 00:43:32.401491 kubelet[1911]: E0513 00:43:32.401370 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:32.401624 kubelet[1911]: E0513 00:43:32.401516 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:32.617889 kubelet[1911]: E0513 00:43:32.617851 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:32.630454 kubelet[1911]: I0513 00:43:32.630390 1911 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qjml5" podStartSLOduration=6.63037091 podStartE2EDuration="6.63037091s" podCreationTimestamp="2025-05-13 00:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:43:32.630115935 +0000 UTC m=+95.303657836" watchObservedRunningTime="2025-05-13 00:43:32.63037091 +0000 UTC m=+95.303912811"
May 13 00:43:33.620034 kubelet[1911]: E0513 00:43:33.620004 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:33.844063 systemd[1]: run-containerd-runc-k8s.io-f06eda5a04b5d9e8f86963aa7b6fc511df6193785c7887045288ddf55f1b7a51-runc.yhbr42.mount: Deactivated successfully.
May 13 00:43:34.401577 kubelet[1911]: E0513 00:43:34.401539 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:34.513433 kubelet[1911]: W0513 00:43:34.513366 1911 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd041e233_089c_45b5_9fdb_d5ea88481845.slice/cri-containerd-70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a.scope WatchSource:0}: task 70e5343712f3efcd5a494b0cfe2f982bc0757046415a18570dcf53b2d56b8a1a not found: not found
May 13 00:43:34.715466 systemd-networkd[1025]: lxc_health: Link UP
May 13 00:43:34.724185 systemd-networkd[1025]: lxc_health: Gained carrier
May 13 00:43:34.724830 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 00:43:35.065246 kubelet[1911]: E0513 00:43:35.065151 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:35.401774 kubelet[1911]: E0513 00:43:35.401729 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:35.623357 kubelet[1911]: E0513 00:43:35.623312 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:36.461059 systemd-networkd[1025]: lxc_health: Gained IPv6LL
May 13 00:43:36.624969 kubelet[1911]: E0513 00:43:36.624935 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:37.619297 kubelet[1911]: W0513 00:43:37.619248 1911 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd041e233_089c_45b5_9fdb_d5ea88481845.slice/cri-containerd-cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8.scope WatchSource:0}: task cbeee4956184b6f61104c1ed781facdcf965ed6db4ffb56088b42597165c62e8 not found: not found
May 13 00:43:40.166166 sshd[3733]: pam_unix(sshd:session): session closed for user core
May 13 00:43:40.168326 systemd[1]: sshd@26-10.0.0.58:22-10.0.0.1:38096.service: Deactivated successfully.
May 13 00:43:40.168981 systemd[1]: session-27.scope: Deactivated successfully.
May 13 00:43:40.169459 systemd-logind[1195]: Session 27 logged out. Waiting for processes to exit.
May 13 00:43:40.170059 systemd-logind[1195]: Removed session 27.
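With cilium-agent running, systemd-networkd brings up the lxc_health veth and the kubelet records the pod startup duration. A minimal way to confirm the same state from a shell on the node (illustrative only; assumes kubectl access to the cluster and the standard cilium-agent container name used by the Cilium DaemonSet):

    # The health-check interface reported by systemd-networkd above.
    ip link show lxc_health
    # Ask the agent itself for its health; --brief prints a one-line OK/degraded summary.
    kubectl -n kube-system exec cilium-qjml5 -c cilium-agent -- cilium status --brief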
May 13 00:43:40.401625 kubelet[1911]: E0513 00:43:40.401576 1911 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:40.724310 kubelet[1911]: W0513 00:43:40.724262 1911 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd041e233_089c_45b5_9fdb_d5ea88481845.slice/cri-containerd-fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3.scope WatchSource:0}: task fd2a6a084ef1e7a1c035db927225c65b7e7fa0f7837eee3baa40c0b42dca65f3 not found: not found