Jul 15 11:28:48.823587 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Jul 15 10:04:37 -00 2025 Jul 15 11:28:48.823604 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:28:48.823613 kernel: BIOS-provided physical RAM map: Jul 15 11:28:48.823619 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 15 11:28:48.823624 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 15 11:28:48.823629 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 15 11:28:48.823636 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 15 11:28:48.823642 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 15 11:28:48.823647 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 15 11:28:48.823654 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 15 11:28:48.823659 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 15 11:28:48.823665 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 15 11:28:48.823670 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 15 11:28:48.823676 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 15 11:28:48.823683 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 15 11:28:48.823690 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 15 11:28:48.823696 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 15 
11:28:48.823701 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 15 11:28:48.823707 kernel: NX (Execute Disable) protection: active Jul 15 11:28:48.823713 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Jul 15 11:28:48.823719 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Jul 15 11:28:48.823725 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Jul 15 11:28:48.823730 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Jul 15 11:28:48.823736 kernel: extended physical RAM map: Jul 15 11:28:48.823742 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 15 11:28:48.823751 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 15 11:28:48.823765 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 15 11:28:48.823774 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 15 11:28:48.823788 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 15 11:28:48.823796 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 15 11:28:48.823804 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 15 11:28:48.823809 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Jul 15 11:28:48.823815 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Jul 15 11:28:48.823821 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable Jul 15 11:28:48.823827 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Jul 15 11:28:48.823832 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Jul 15 11:28:48.823841 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 15 11:28:48.823847 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 15 11:28:48.823853 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 15 11:28:48.823859 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 15 11:28:48.823867 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 15 11:28:48.823873 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 15 11:28:48.823880 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 15 11:28:48.823887 kernel: efi: EFI v2.70 by EDK II Jul 15 11:28:48.823894 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Jul 15 11:28:48.823900 kernel: random: crng init done Jul 15 11:28:48.823906 kernel: SMBIOS 2.8 present. Jul 15 11:28:48.823913 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jul 15 11:28:48.823919 kernel: Hypervisor detected: KVM Jul 15 11:28:48.823925 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 15 11:28:48.823931 kernel: kvm-clock: cpu 0, msr 5b19b001, primary cpu clock Jul 15 11:28:48.823938 kernel: kvm-clock: using sched offset of 4098404512 cycles Jul 15 11:28:48.823956 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 11:28:48.823963 kernel: tsc: Detected 2794.750 MHz processor Jul 15 11:28:48.823970 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 11:28:48.823976 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 11:28:48.823983 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 15 11:28:48.823990 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 11:28:48.823997 kernel: Using GB pages for direct mapping Jul 15 11:28:48.824003 kernel: Secure boot disabled Jul 15 11:28:48.824009 kernel: ACPI: Early table checksum verification disabled Jul 15 11:28:48.824017 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 15 11:28:48.824023 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 15 11:28:48.824030 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:28:48.824036 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:28:48.824043 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 15 11:28:48.824049 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:28:48.824055 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:28:48.824062 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:28:48.824068 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 11:28:48.824075 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 15 11:28:48.824082 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 15 11:28:48.824088 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 15 11:28:48.824094 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 15 11:28:48.824101 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 15 11:28:48.824107 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 15 11:28:48.824113 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 15 11:28:48.824120 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 15 11:28:48.824126 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 15 11:28:48.824133 kernel: No NUMA configuration found Jul 15 11:28:48.824140 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 15 11:28:48.824146 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 15 
11:28:48.824153 kernel: Zone ranges: Jul 15 11:28:48.824159 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 11:28:48.824166 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 15 11:28:48.824172 kernel: Normal empty Jul 15 11:28:48.824178 kernel: Movable zone start for each node Jul 15 11:28:48.824185 kernel: Early memory node ranges Jul 15 11:28:48.824192 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 15 11:28:48.824198 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 15 11:28:48.824205 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 15 11:28:48.824211 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 15 11:28:48.824217 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 15 11:28:48.824224 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 15 11:28:48.824230 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 15 11:28:48.824236 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 11:28:48.824243 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 15 11:28:48.824249 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 15 11:28:48.824256 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 11:28:48.824263 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 15 11:28:48.824269 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 15 11:28:48.824276 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 15 11:28:48.824282 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 15 11:28:48.824288 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 15 11:28:48.824295 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 15 11:28:48.824301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 15 11:28:48.824307 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 15 11:28:48.824315 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 11:28:48.824321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 15 11:28:48.824328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 15 11:28:48.824334 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 11:28:48.824341 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 15 11:28:48.824347 kernel: TSC deadline timer available Jul 15 11:28:48.824353 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 15 11:28:48.824360 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 15 11:28:48.824366 kernel: kvm-guest: setup PV sched yield Jul 15 11:28:48.824373 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 15 11:28:48.824380 kernel: Booting paravirtualized kernel on KVM Jul 15 11:28:48.824390 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 11:28:48.824398 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Jul 15 11:28:48.824405 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Jul 15 11:28:48.824411 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Jul 15 11:28:48.824418 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 15 11:28:48.824425 kernel: kvm-guest: setup async PF for cpu 0 Jul 15 11:28:48.824431 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Jul 15 11:28:48.824438 kernel: kvm-guest: PV spinlocks enabled Jul 15 11:28:48.824445 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 11:28:48.824452 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Jul 15 11:28:48.824459 kernel: Policy zone: DMA32 Jul 15 11:28:48.824467 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:28:48.824474 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 11:28:48.824481 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 11:28:48.824489 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 11:28:48.824517 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 11:28:48.824525 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 169308K reserved, 0K cma-reserved) Jul 15 11:28:48.824532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 15 11:28:48.824539 kernel: ftrace: allocating 34607 entries in 136 pages Jul 15 11:28:48.824546 kernel: ftrace: allocated 136 pages with 2 groups Jul 15 11:28:48.824552 kernel: rcu: Hierarchical RCU implementation. Jul 15 11:28:48.824559 kernel: rcu: RCU event tracing is enabled. Jul 15 11:28:48.824566 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 15 11:28:48.824575 kernel: Rude variant of Tasks RCU enabled. Jul 15 11:28:48.824581 kernel: Tracing variant of Tasks RCU enabled. Jul 15 11:28:48.824588 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 15 11:28:48.824595 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 15 11:28:48.824602 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 15 11:28:48.824608 kernel: Console: colour dummy device 80x25 Jul 15 11:28:48.824615 kernel: printk: console [ttyS0] enabled Jul 15 11:28:48.824622 kernel: ACPI: Core revision 20210730 Jul 15 11:28:48.824629 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 15 11:28:48.824637 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 11:28:48.824643 kernel: x2apic enabled Jul 15 11:28:48.824650 kernel: Switched APIC routing to physical x2apic. Jul 15 11:28:48.824657 kernel: kvm-guest: setup PV IPIs Jul 15 11:28:48.824664 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 15 11:28:48.824671 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 15 11:28:48.824678 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Jul 15 11:28:48.824684 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 15 11:28:48.824691 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 15 11:28:48.824699 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 15 11:28:48.824706 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 11:28:48.824713 kernel: Spectre V2 : Mitigation: Retpolines Jul 15 11:28:48.824720 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 11:28:48.824726 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 15 11:28:48.824733 kernel: RETBleed: Mitigation: untrained return thunk Jul 15 11:28:48.824740 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 15 11:28:48.824747 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 15 11:28:48.824754 kernel: 
x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 11:28:48.824762 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 11:28:48.824769 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 11:28:48.824775 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 11:28:48.824782 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 15 11:28:48.824789 kernel: Freeing SMP alternatives memory: 32K Jul 15 11:28:48.824796 kernel: pid_max: default: 32768 minimum: 301 Jul 15 11:28:48.824803 kernel: LSM: Security Framework initializing Jul 15 11:28:48.824809 kernel: SELinux: Initializing. Jul 15 11:28:48.824816 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 11:28:48.824824 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 11:28:48.824832 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 15 11:28:48.824841 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 15 11:28:48.824850 kernel: ... version: 0 Jul 15 11:28:48.824859 kernel: ... bit width: 48 Jul 15 11:28:48.824868 kernel: ... generic registers: 6 Jul 15 11:28:48.824877 kernel: ... value mask: 0000ffffffffffff Jul 15 11:28:48.824886 kernel: ... max period: 00007fffffffffff Jul 15 11:28:48.824895 kernel: ... fixed-purpose events: 0 Jul 15 11:28:48.824903 kernel: ... event mask: 000000000000003f Jul 15 11:28:48.824910 kernel: signal: max sigframe size: 1776 Jul 15 11:28:48.824917 kernel: rcu: Hierarchical SRCU implementation. Jul 15 11:28:48.824924 kernel: smp: Bringing up secondary CPUs ... Jul 15 11:28:48.824930 kernel: x86: Booting SMP configuration: Jul 15 11:28:48.824937 kernel: .... 
node #0, CPUs: #1 Jul 15 11:28:48.824954 kernel: kvm-clock: cpu 1, msr 5b19b041, secondary cpu clock Jul 15 11:28:48.824960 kernel: kvm-guest: setup async PF for cpu 1 Jul 15 11:28:48.824967 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Jul 15 11:28:48.824975 kernel: #2 Jul 15 11:28:48.824982 kernel: kvm-clock: cpu 2, msr 5b19b081, secondary cpu clock Jul 15 11:28:48.824990 kernel: kvm-guest: setup async PF for cpu 2 Jul 15 11:28:48.824997 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Jul 15 11:28:48.825003 kernel: #3 Jul 15 11:28:48.825010 kernel: kvm-clock: cpu 3, msr 5b19b0c1, secondary cpu clock Jul 15 11:28:48.825017 kernel: kvm-guest: setup async PF for cpu 3 Jul 15 11:28:48.825023 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Jul 15 11:28:48.825030 kernel: smp: Brought up 1 node, 4 CPUs Jul 15 11:28:48.825037 kernel: smpboot: Max logical packages: 1 Jul 15 11:28:48.825045 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 15 11:28:48.825052 kernel: devtmpfs: initialized Jul 15 11:28:48.825058 kernel: x86/mm: Memory block size: 128MB Jul 15 11:28:48.825065 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 15 11:28:48.825072 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 15 11:28:48.825079 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 15 11:28:48.825086 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 15 11:28:48.825093 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 15 11:28:48.825100 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 11:28:48.825107 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 15 11:28:48.825114 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 11:28:48.825121 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Jul 15 11:28:48.825128 kernel: audit: initializing netlink subsys (disabled) Jul 15 11:28:48.825134 kernel: audit: type=2000 audit(1752578928.750:1): state=initialized audit_enabled=0 res=1 Jul 15 11:28:48.825141 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 11:28:48.825148 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 11:28:48.825155 kernel: cpuidle: using governor menu Jul 15 11:28:48.825162 kernel: ACPI: bus type PCI registered Jul 15 11:28:48.825169 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 11:28:48.825176 kernel: dca service started, version 1.12.1 Jul 15 11:28:48.825183 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 15 11:28:48.825190 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Jul 15 11:28:48.825196 kernel: PCI: Using configuration type 1 for base access Jul 15 11:28:48.825203 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 15 11:28:48.825210 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 11:28:48.825217 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 11:28:48.825225 kernel: ACPI: Added _OSI(Module Device) Jul 15 11:28:48.825232 kernel: ACPI: Added _OSI(Processor Device) Jul 15 11:28:48.825238 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 11:28:48.825245 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 15 11:28:48.825252 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 15 11:28:48.825258 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 15 11:28:48.825265 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 11:28:48.825272 kernel: ACPI: Interpreter enabled Jul 15 11:28:48.825279 kernel: ACPI: PM: (supports S0 S3 S5) Jul 15 11:28:48.825285 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 11:28:48.825293 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 11:28:48.825300 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 15 11:28:48.825307 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 11:28:48.825418 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 15 11:28:48.825489 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 15 11:28:48.825576 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 15 11:28:48.825586 kernel: PCI host bridge to bus 0000:00 Jul 15 11:28:48.825659 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 11:28:48.825719 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 15 11:28:48.825777 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 11:28:48.825835 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 15 11:28:48.825893 kernel: pci_bus 0000:00: root bus resource [mem 
0xc0000000-0xfebfffff window] Jul 15 11:28:48.825968 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jul 15 11:28:48.826028 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 11:28:48.826113 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 15 11:28:48.826188 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 15 11:28:48.826255 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 15 11:28:48.826322 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jul 15 11:28:48.826387 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 15 11:28:48.826454 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jul 15 11:28:48.826536 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 15 11:28:48.826616 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 15 11:28:48.826751 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jul 15 11:28:48.826863 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jul 15 11:28:48.826994 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 15 11:28:48.827082 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 15 11:28:48.827152 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jul 15 11:28:48.827221 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jul 15 11:28:48.827288 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 15 11:28:48.827360 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 15 11:28:48.827428 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jul 15 11:28:48.827495 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 15 11:28:48.827654 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 15 11:28:48.827723 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 15 11:28:48.827798 
kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 15 11:28:48.827866 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 15 11:28:48.827948 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 15 11:28:48.828017 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jul 15 11:28:48.828082 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jul 15 11:28:48.828152 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 15 11:28:48.828222 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jul 15 11:28:48.828232 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 15 11:28:48.828239 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 15 11:28:48.828246 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 15 11:28:48.828253 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 15 11:28:48.828260 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 15 11:28:48.828266 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 15 11:28:48.828273 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 15 11:28:48.828280 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 15 11:28:48.828289 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 15 11:28:48.828296 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 15 11:28:48.828303 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 15 11:28:48.828309 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 15 11:28:48.828316 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 15 11:28:48.828323 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 15 11:28:48.828330 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 15 11:28:48.828337 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 15 11:28:48.828344 kernel: iommu: Default domain type: Translated Jul 15 
11:28:48.828352 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 11:28:48.828417 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 15 11:28:48.828482 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 15 11:28:48.828563 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 15 11:28:48.828572 kernel: vgaarb: loaded Jul 15 11:28:48.828579 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 15 11:28:48.828586 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 15 11:28:48.828593 kernel: PTP clock support registered Jul 15 11:28:48.828600 kernel: Registered efivars operations Jul 15 11:28:48.828609 kernel: PCI: Using ACPI for IRQ routing Jul 15 11:28:48.828616 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 11:28:48.828623 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 15 11:28:48.828630 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 15 11:28:48.828636 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Jul 15 11:28:48.828643 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Jul 15 11:28:48.828649 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 15 11:28:48.828656 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 15 11:28:48.828664 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 15 11:28:48.828671 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 15 11:28:48.828678 kernel: clocksource: Switched to clocksource kvm-clock Jul 15 11:28:48.828685 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 11:28:48.828692 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 11:28:48.828699 kernel: pnp: PnP ACPI init Jul 15 11:28:48.828771 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 15 11:28:48.828781 kernel: pnp: PnP ACPI: found 6 devices Jul 15 11:28:48.828790 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 11:28:48.828797 kernel: NET: Registered PF_INET protocol family Jul 15 11:28:48.828804 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 11:28:48.828811 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 11:28:48.828818 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 11:28:48.828824 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 11:28:48.828831 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 15 11:28:48.828838 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 11:28:48.828845 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 11:28:48.828853 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 11:28:48.828859 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 11:28:48.828866 kernel: NET: Registered PF_XDP protocol family Jul 15 11:28:48.828934 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 15 11:28:48.829016 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 15 11:28:48.829077 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 15 11:28:48.829135 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 15 11:28:48.829193 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 15 11:28:48.829255 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 15 11:28:48.829313 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 15 11:28:48.829371 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jul 15 11:28:48.829380 kernel: PCI: CLS 0 bytes, default 64 Jul 15 11:28:48.829387 kernel: Initialise system trusted keyrings Jul 15 11:28:48.829393 
kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 11:28:48.829400 kernel: Key type asymmetric registered Jul 15 11:28:48.829407 kernel: Asymmetric key parser 'x509' registered Jul 15 11:28:48.829414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 15 11:28:48.829422 kernel: io scheduler mq-deadline registered Jul 15 11:28:48.829429 kernel: io scheduler kyber registered Jul 15 11:28:48.829443 kernel: io scheduler bfq registered Jul 15 11:28:48.829452 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 11:28:48.829459 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 15 11:28:48.829467 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 15 11:28:48.829474 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 15 11:28:48.829481 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 11:28:48.829488 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 11:28:48.829510 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 15 11:28:48.829517 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 15 11:28:48.829524 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 11:28:48.829600 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 15 11:28:48.829610 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 15 11:28:48.829673 kernel: rtc_cmos 00:04: registered as rtc0 Jul 15 11:28:48.829735 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T11:28:48 UTC (1752578928) Jul 15 11:28:48.829795 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 15 11:28:48.829807 kernel: efifb: probing for efifb Jul 15 11:28:48.829814 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 15 11:28:48.829821 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 15 11:28:48.829828 kernel: efifb: scrolling: redraw Jul 15 11:28:48.829835 kernel: efifb: Truecolor: 
size=8:8:8:8, shift=24:16:8:0 Jul 15 11:28:48.829843 kernel: Console: switching to colour frame buffer device 160x50 Jul 15 11:28:48.829850 kernel: fb0: EFI VGA frame buffer device Jul 15 11:28:48.829857 kernel: pstore: Registered efi as persistent store backend Jul 15 11:28:48.829864 kernel: NET: Registered PF_INET6 protocol family Jul 15 11:28:48.829872 kernel: Segment Routing with IPv6 Jul 15 11:28:48.829879 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 11:28:48.829887 kernel: NET: Registered PF_PACKET protocol family Jul 15 11:28:48.829895 kernel: Key type dns_resolver registered Jul 15 11:28:48.829902 kernel: IPI shorthand broadcast: enabled Jul 15 11:28:48.829910 kernel: sched_clock: Marking stable (420002593, 123226654)->(584239370, -41010123) Jul 15 11:28:48.829919 kernel: registered taskstats version 1 Jul 15 11:28:48.829926 kernel: Loading compiled-in X.509 certificates Jul 15 11:28:48.829933 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: c4b3a19d3bd6de5654dc12075428550cf6251289' Jul 15 11:28:48.829949 kernel: Key type .fscrypt registered Jul 15 11:28:48.829956 kernel: Key type fscrypt-provisioning registered Jul 15 11:28:48.829963 kernel: pstore: Using crash dump compression: deflate Jul 15 11:28:48.829970 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 15 11:28:48.829977 kernel: ima: Allocated hash algorithm: sha1 Jul 15 11:28:48.829987 kernel: ima: No architecture policies found Jul 15 11:28:48.829994 kernel: clk: Disabling unused clocks Jul 15 11:28:48.830025 kernel: Freeing unused kernel image (initmem) memory: 47476K Jul 15 11:28:48.830033 kernel: Write protecting the kernel read-only data: 28672k Jul 15 11:28:48.830040 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 15 11:28:48.830047 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Jul 15 11:28:48.830054 kernel: Run /init as init process Jul 15 11:28:48.830061 kernel: with arguments: Jul 15 11:28:48.830068 kernel: /init Jul 15 11:28:48.830077 kernel: with environment: Jul 15 11:28:48.830084 kernel: HOME=/ Jul 15 11:28:48.830091 kernel: TERM=linux Jul 15 11:28:48.830098 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 11:28:48.830107 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 15 11:28:48.830129 systemd[1]: Detected virtualization kvm. Jul 15 11:28:48.830155 systemd[1]: Detected architecture x86-64. Jul 15 11:28:48.830166 systemd[1]: Running in initrd. Jul 15 11:28:48.830175 systemd[1]: No hostname configured, using default hostname. Jul 15 11:28:48.830183 systemd[1]: Hostname set to . Jul 15 11:28:48.830190 systemd[1]: Initializing machine ID from VM UUID. Jul 15 11:28:48.830198 systemd[1]: Queued start job for default target initrd.target. Jul 15 11:28:48.830205 systemd[1]: Started systemd-ask-password-console.path. Jul 15 11:28:48.830213 systemd[1]: Reached target cryptsetup.target. Jul 15 11:28:48.830220 systemd[1]: Reached target paths.target. Jul 15 11:28:48.830228 systemd[1]: Reached target slices.target. 
Jul 15 11:28:48.830235 systemd[1]: Reached target swap.target. Jul 15 11:28:48.830244 systemd[1]: Reached target timers.target. Jul 15 11:28:48.830252 systemd[1]: Listening on iscsid.socket. Jul 15 11:28:48.830259 systemd[1]: Listening on iscsiuio.socket. Jul 15 11:28:48.830267 systemd[1]: Listening on systemd-journald-audit.socket. Jul 15 11:28:48.830274 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 15 11:28:48.830282 systemd[1]: Listening on systemd-journald.socket. Jul 15 11:28:48.830289 systemd[1]: Listening on systemd-networkd.socket. Jul 15 11:28:48.830298 systemd[1]: Listening on systemd-udevd-control.socket. Jul 15 11:28:48.830306 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 15 11:28:48.830313 systemd[1]: Reached target sockets.target. Jul 15 11:28:48.830321 systemd[1]: Starting kmod-static-nodes.service... Jul 15 11:28:48.830328 systemd[1]: Finished network-cleanup.service. Jul 15 11:28:48.830336 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 11:28:48.830343 systemd[1]: Starting systemd-journald.service... Jul 15 11:28:48.830355 systemd[1]: Starting systemd-modules-load.service... Jul 15 11:28:48.830363 systemd[1]: Starting systemd-resolved.service... Jul 15 11:28:48.830372 systemd[1]: Starting systemd-vconsole-setup.service... Jul 15 11:28:48.830380 systemd[1]: Finished kmod-static-nodes.service. Jul 15 11:28:48.830387 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 11:28:48.830395 kernel: audit: type=1130 audit(1752578928.823:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.830403 systemd[1]: Finished systemd-vconsole-setup.service. Jul 15 11:28:48.830410 systemd[1]: Starting dracut-cmdline-ask.service... 
Jul 15 11:28:48.830421 systemd-journald[197]: Journal started Jul 15 11:28:48.830461 systemd-journald[197]: Runtime Journal (/run/log/journal/ea3ba73fd9f949a1b4a834e28afcd44c) is 6.0M, max 48.4M, 42.4M free. Jul 15 11:28:48.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.827033 systemd-modules-load[198]: Inserted module 'overlay' Jul 15 11:28:48.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.834798 kernel: audit: type=1130 audit(1752578928.828:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.839423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 15 11:28:48.839449 systemd[1]: Started systemd-journald.service. Jul 15 11:28:48.842722 kernel: audit: type=1130 audit(1752578928.839:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.843078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 15 11:28:48.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:48.847528 kernel: audit: type=1130 audit(1752578928.842:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.851343 systemd[1]: Finished dracut-cmdline-ask.service. Jul 15 11:28:48.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.853966 systemd[1]: Starting dracut-cmdline.service... Jul 15 11:28:48.857693 kernel: audit: type=1130 audit(1752578928.853:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.856981 systemd-resolved[199]: Positive Trust Anchors: Jul 15 11:28:48.856992 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:28:48.857019 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:28:48.859630 systemd-resolved[199]: Defaulting to hostname 'linux'. Jul 15 11:28:48.872606 kernel: audit: type=1130 audit(1752578928.860:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:48.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.872680 dracut-cmdline[215]: dracut-dracut-053 Jul 15 11:28:48.872680 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:28:48.860458 systemd[1]: Started systemd-resolved.service. Jul 15 11:28:48.860771 systemd[1]: Reached target nss-lookup.target. Jul 15 11:28:48.901544 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 11:28:48.906872 systemd-modules-load[198]: Inserted module 'br_netfilter' Jul 15 11:28:48.907913 kernel: Bridge firewalling registered Jul 15 11:28:48.924530 kernel: SCSI subsystem initialized Jul 15 11:28:48.935550 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 11:28:48.935573 kernel: device-mapper: uevent: version 1.0.3 Jul 15 11:28:48.935583 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 15 11:28:48.939533 kernel: Loading iSCSI transport class v2.0-870. Jul 15 11:28:48.939638 systemd-modules-load[198]: Inserted module 'dm_multipath' Jul 15 11:28:48.940557 systemd[1]: Finished systemd-modules-load.service. 
Jul 15 11:28:48.947944 kernel: audit: type=1130 audit(1752578928.940:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.946492 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:28:48.959271 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:28:48.964406 kernel: iscsi: registered transport (tcp) Jul 15 11:28:48.964428 kernel: audit: type=1130 audit(1752578928.959:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:48.985727 kernel: iscsi: registered transport (qla4xxx) Jul 15 11:28:48.985771 kernel: QLogic iSCSI HBA Driver Jul 15 11:28:49.014519 systemd[1]: Finished dracut-cmdline.service. Jul 15 11:28:49.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:49.017122 systemd[1]: Starting dracut-pre-udev.service... Jul 15 11:28:49.020538 kernel: audit: type=1130 audit(1752578929.015:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:49.062555 kernel: raid6: avx2x4 gen() 27508 MB/s Jul 15 11:28:49.079541 kernel: raid6: avx2x4 xor() 8113 MB/s Jul 15 11:28:49.096550 kernel: raid6: avx2x2 gen() 32581 MB/s Jul 15 11:28:49.113534 kernel: raid6: avx2x2 xor() 19133 MB/s Jul 15 11:28:49.130539 kernel: raid6: avx2x1 gen() 26197 MB/s Jul 15 11:28:49.147534 kernel: raid6: avx2x1 xor() 15205 MB/s Jul 15 11:28:49.164530 kernel: raid6: sse2x4 gen() 14793 MB/s Jul 15 11:28:49.181559 kernel: raid6: sse2x4 xor() 6502 MB/s Jul 15 11:28:49.198551 kernel: raid6: sse2x2 gen() 10378 MB/s Jul 15 11:28:49.215551 kernel: raid6: sse2x2 xor() 8477 MB/s Jul 15 11:28:49.232542 kernel: raid6: sse2x1 gen() 12364 MB/s Jul 15 11:28:49.250197 kernel: raid6: sse2x1 xor() 7438 MB/s Jul 15 11:28:49.250272 kernel: raid6: using algorithm avx2x2 gen() 32581 MB/s Jul 15 11:28:49.250286 kernel: raid6: .... xor() 19133 MB/s, rmw enabled Jul 15 11:28:49.250974 kernel: raid6: using avx2x2 recovery algorithm Jul 15 11:28:49.263528 kernel: xor: automatically using best checksumming function avx Jul 15 11:28:49.351533 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 15 11:28:49.357335 systemd[1]: Finished dracut-pre-udev.service. Jul 15 11:28:49.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:49.358000 audit: BPF prog-id=7 op=LOAD Jul 15 11:28:49.358000 audit: BPF prog-id=8 op=LOAD Jul 15 11:28:49.359624 systemd[1]: Starting systemd-udevd.service... Jul 15 11:28:49.370984 systemd-udevd[401]: Using default interface naming scheme 'v252'. Jul 15 11:28:49.374382 systemd[1]: Started systemd-udevd.service. Jul 15 11:28:49.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:49.377029 systemd[1]: Starting dracut-pre-trigger.service... Jul 15 11:28:49.385633 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jul 15 11:28:49.406268 systemd[1]: Finished dracut-pre-trigger.service. Jul 15 11:28:49.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:49.408607 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:28:49.439834 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:28:49.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:49.481548 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 11:28:49.490161 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 11:28:49.490180 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 11:28:49.490192 kernel: GPT:9289727 != 19775487 Jul 15 11:28:49.490204 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 11:28:49.490216 kernel: GPT:9289727 != 19775487 Jul 15 11:28:49.490227 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 11:28:49.490243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:28:49.502541 kernel: AVX2 version of gcm_enc/dec engaged. Jul 15 11:28:49.502588 kernel: AES CTR mode by8 optimization enabled Jul 15 11:28:49.506522 kernel: libata version 3.00 loaded. 
Jul 15 11:28:49.514523 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (441) Jul 15 11:28:49.519549 kernel: ahci 0000:00:1f.2: version 3.0 Jul 15 11:28:49.534622 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 15 11:28:49.534637 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 15 11:28:49.534722 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 15 11:28:49.534798 kernel: scsi host0: ahci Jul 15 11:28:49.534882 kernel: scsi host1: ahci Jul 15 11:28:49.534975 kernel: scsi host2: ahci Jul 15 11:28:49.535052 kernel: scsi host3: ahci Jul 15 11:28:49.535127 kernel: scsi host4: ahci Jul 15 11:28:49.535205 kernel: scsi host5: ahci Jul 15 11:28:49.535283 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jul 15 11:28:49.535292 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jul 15 11:28:49.535301 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jul 15 11:28:49.535310 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jul 15 11:28:49.535318 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jul 15 11:28:49.535327 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jul 15 11:28:49.520343 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 15 11:28:49.523612 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 15 11:28:49.528486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 15 11:28:49.544008 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 15 11:28:49.548698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:28:49.551012 systemd[1]: Starting disk-uuid.service... Jul 15 11:28:49.556545 disk-uuid[527]: Primary Header is updated. Jul 15 11:28:49.556545 disk-uuid[527]: Secondary Entries is updated. 
Jul 15 11:28:49.556545 disk-uuid[527]: Secondary Header is updated. Jul 15 11:28:49.560518 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:28:49.563520 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:28:49.850172 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 15 11:28:49.850249 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 15 11:28:49.850261 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 15 11:28:49.850272 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 15 11:28:49.850282 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 15 11:28:49.851529 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 15 11:28:49.852531 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 15 11:28:49.854021 kernel: ata3.00: applying bridge limits Jul 15 11:28:49.854032 kernel: ata3.00: configured for UDMA/100 Jul 15 11:28:49.854524 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 15 11:28:49.887529 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 15 11:28:49.903949 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 11:28:49.903966 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 11:28:50.565617 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:28:50.565680 disk-uuid[528]: The operation has completed successfully. Jul 15 11:28:50.582612 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 11:28:50.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.582686 systemd[1]: Finished disk-uuid.service. Jul 15 11:28:50.591229 systemd[1]: Starting verity-setup.service... 
Jul 15 11:28:50.603521 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 15 11:28:50.619955 systemd[1]: Found device dev-mapper-usr.device. Jul 15 11:28:50.620950 systemd[1]: Mounting sysusr-usr.mount... Jul 15 11:28:50.623375 systemd[1]: Finished verity-setup.service. Jul 15 11:28:50.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.678348 systemd[1]: Mounted sysusr-usr.mount. Jul 15 11:28:50.679831 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 15 11:28:50.679900 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 15 11:28:50.681830 systemd[1]: Starting ignition-setup.service... Jul 15 11:28:50.683793 systemd[1]: Starting parse-ip-for-networkd.service... Jul 15 11:28:50.689898 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:28:50.689926 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:28:50.689936 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:28:50.697525 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 15 11:28:50.705328 systemd[1]: Finished ignition-setup.service. Jul 15 11:28:50.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.707061 systemd[1]: Starting ignition-fetch-offline.service... 
Jul 15 11:28:50.744614 ignition[642]: Ignition 2.14.0 Jul 15 11:28:50.744627 ignition[642]: Stage: fetch-offline Jul 15 11:28:50.744706 ignition[642]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:28:50.744718 ignition[642]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:28:50.744841 ignition[642]: parsed url from cmdline: "" Jul 15 11:28:50.744846 ignition[642]: no config URL provided Jul 15 11:28:50.744852 ignition[642]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 11:28:50.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.749404 systemd[1]: Finished parse-ip-for-networkd.service. Jul 15 11:28:50.752000 audit: BPF prog-id=9 op=LOAD Jul 15 11:28:50.744861 ignition[642]: no config at "/usr/lib/ignition/user.ign" Jul 15 11:28:50.744880 ignition[642]: op(1): [started] loading QEMU firmware config module Jul 15 11:28:50.753254 systemd[1]: Starting systemd-networkd.service... Jul 15 11:28:50.744896 ignition[642]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 11:28:50.749564 ignition[642]: op(1): [finished] loading QEMU firmware config module Jul 15 11:28:50.749582 ignition[642]: QEMU firmware config was not found. Ignoring... Jul 15 11:28:50.793724 ignition[642]: parsing config with SHA512: a0e6bb40a96c1ad49de25df87f92c0ef378c6ad3494a64757bfdd340084b86580ba6c79b49f876ed8ade37ce1ea34d1543fc0ec8597c9b8653f834b4725c2600 Jul 15 11:28:50.801469 unknown[642]: fetched base config from "system" Jul 15 11:28:50.801642 unknown[642]: fetched user config from "qemu" Jul 15 11:28:50.802151 ignition[642]: fetch-offline: fetch-offline passed Jul 15 11:28:50.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:50.803032 systemd[1]: Finished ignition-fetch-offline.service. Jul 15 11:28:50.802198 ignition[642]: Ignition finished successfully Jul 15 11:28:50.810395 systemd-networkd[721]: lo: Link UP Jul 15 11:28:50.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.810403 systemd-networkd[721]: lo: Gained carrier Jul 15 11:28:50.810782 systemd-networkd[721]: Enumeration completed Jul 15 11:28:50.810854 systemd[1]: Started systemd-networkd.service. Jul 15 11:28:50.810967 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:28:50.811767 systemd-networkd[721]: eth0: Link UP Jul 15 11:28:50.811769 systemd-networkd[721]: eth0: Gained carrier Jul 15 11:28:50.812445 systemd[1]: Reached target network.target. Jul 15 11:28:50.813300 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 11:28:50.821222 systemd[1]: Starting ignition-kargs.service... Jul 15 11:28:50.823264 systemd[1]: Starting iscsiuio.service... Jul 15 11:28:50.824010 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:28:50.827992 systemd[1]: Started iscsiuio.service. Jul 15 11:28:50.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.830293 systemd[1]: Starting iscsid.service... 
Jul 15 11:28:50.830949 ignition[723]: Ignition 2.14.0 Jul 15 11:28:50.830955 ignition[723]: Stage: kargs Jul 15 11:28:50.831059 ignition[723]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:28:50.831070 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:28:50.834778 iscsid[732]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:28:50.834778 iscsid[732]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 15 11:28:50.834778 iscsid[732]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 15 11:28:50.834778 iscsid[732]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 15 11:28:50.834778 iscsid[732]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:28:50.834778 iscsid[732]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 15 11:28:50.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.832322 ignition[723]: kargs: kargs passed Jul 15 11:28:50.834708 systemd[1]: Started iscsid.service. Jul 15 11:28:50.832363 ignition[723]: Ignition finished successfully Jul 15 11:28:50.849723 systemd[1]: Finished ignition-kargs.service. Jul 15 11:28:50.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:50.852893 systemd[1]: Starting dracut-initqueue.service... Jul 15 11:28:50.855541 systemd[1]: Starting ignition-disks.service... Jul 15 11:28:50.861934 systemd[1]: Finished dracut-initqueue.service. Jul 15 11:28:50.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.862076 systemd[1]: Reached target remote-fs-pre.target. Jul 15 11:28:50.864977 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:28:50.866786 systemd[1]: Reached target remote-fs.target. Jul 15 11:28:50.869216 systemd[1]: Starting dracut-pre-mount.service... Jul 15 11:28:50.869691 ignition[734]: Ignition 2.14.0 Jul 15 11:28:50.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.869697 ignition[734]: Stage: disks Jul 15 11:28:50.871464 systemd[1]: Finished ignition-disks.service. Jul 15 11:28:50.869794 ignition[734]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:28:50.872440 systemd[1]: Reached target initrd-root-device.target. Jul 15 11:28:50.869802 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:28:50.873862 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:28:50.870727 ignition[734]: disks: disks passed Jul 15 11:28:50.875145 systemd[1]: Reached target local-fs.target. Jul 15 11:28:50.870763 ignition[734]: Ignition finished successfully Jul 15 11:28:50.876496 systemd[1]: Reached target sysinit.target. Jul 15 11:28:50.876716 systemd[1]: Reached target basic.target. Jul 15 11:28:50.888387 systemd[1]: Finished dracut-pre-mount.service. 
Jul 15 11:28:50.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.889978 systemd[1]: Starting systemd-fsck-root.service... Jul 15 11:28:50.899416 systemd-fsck[754]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 15 11:28:50.904658 systemd[1]: Finished systemd-fsck-root.service. Jul 15 11:28:50.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.905322 systemd[1]: Mounting sysroot.mount... Jul 15 11:28:50.912526 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 15 11:28:50.912562 systemd[1]: Mounted sysroot.mount. Jul 15 11:28:50.912645 systemd[1]: Reached target initrd-root-fs.target. Jul 15 11:28:50.915930 systemd[1]: Mounting sysroot-usr.mount... Jul 15 11:28:50.916253 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 15 11:28:50.916279 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 11:28:50.916295 systemd[1]: Reached target ignition-diskful.target. Jul 15 11:28:50.918767 systemd[1]: Mounted sysroot-usr.mount. Jul 15 11:28:50.920868 systemd[1]: Starting initrd-setup-root.service... 
Jul 15 11:28:50.926206 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 11:28:50.927629 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Jul 15 11:28:50.929578 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 11:28:50.933367 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 11:28:50.958058 systemd[1]: Finished initrd-setup-root.service. Jul 15 11:28:50.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.960484 systemd[1]: Starting ignition-mount.service... Jul 15 11:28:50.962784 systemd[1]: Starting sysroot-boot.service... Jul 15 11:28:50.965172 bash[805]: umount: /sysroot/usr/share/oem: not mounted. Jul 15 11:28:50.971955 ignition[806]: INFO : Ignition 2.14.0 Jul 15 11:28:50.971955 ignition[806]: INFO : Stage: mount Jul 15 11:28:50.973985 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:28:50.973985 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:28:50.973985 ignition[806]: INFO : mount: mount passed Jul 15 11:28:50.973985 ignition[806]: INFO : Ignition finished successfully Jul 15 11:28:50.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:50.975096 systemd[1]: Finished ignition-mount.service. Jul 15 11:28:50.981171 systemd[1]: Finished sysroot-boot.service. Jul 15 11:28:50.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:51.629904 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Jul 15 11:28:51.635522 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Jul 15 11:28:51.635554 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 11:28:51.638083 kernel: BTRFS info (device vda6): using free space tree
Jul 15 11:28:51.638102 kernel: BTRFS info (device vda6): has skinny extents
Jul 15 11:28:51.641151 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 15 11:28:51.641911 systemd[1]: Starting ignition-files.service...
Jul 15 11:28:51.656011 ignition[835]: INFO : Ignition 2.14.0
Jul 15 11:28:51.656011 ignition[835]: INFO : Stage: files
Jul 15 11:28:51.657646 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 11:28:51.657646 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:28:51.657646 ignition[835]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 11:28:51.661802 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 11:28:51.661802 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 11:28:51.661802 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 11:28:51.661802 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 11:28:51.661802 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 11:28:51.661802 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 15 11:28:51.661301 unknown[835]: wrote ssh authorized keys file for user: core
Jul 15 11:28:51.673107 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 15 11:28:51.722590 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 11:28:51.890780 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 15 11:28:51.892959 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 11:28:51.892959 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 15 11:28:51.969920 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 15 11:28:52.053103 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 11:28:52.055031 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 15 11:28:52.456661 systemd-networkd[721]: eth0: Gained IPv6LL
Jul 15 11:28:52.718373 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 15 11:28:52.979251 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 15 11:28:52.979251 ignition[835]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 11:28:52.983698 ignition[835]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 11:28:53.012849 ignition[835]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 11:28:53.015294 ignition[835]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 11:28:53.015294 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 11:28:53.015294 ignition[835]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 11:28:53.015294 ignition[835]: INFO : files: files passed
Jul 15 11:28:53.015294 ignition[835]: INFO : Ignition finished successfully
Jul 15 11:28:53.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.014464 systemd[1]: Finished ignition-files.service.
Jul 15 11:28:53.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.016098 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 15 11:28:53.017948 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 15 11:28:53.029777 initrd-setup-root-after-ignition[860]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Jul 15 11:28:53.018448 systemd[1]: Starting ignition-quench.service...
Jul 15 11:28:53.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.032904 initrd-setup-root-after-ignition[863]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 11:28:53.024250 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 15 11:28:53.027598 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 11:28:53.029717 systemd[1]: Finished ignition-quench.service.
Jul 15 11:28:53.032856 systemd[1]: Reached target ignition-complete.target.
Jul 15 11:28:53.038940 systemd[1]: Starting initrd-parse-etc.service...
Jul 15 11:28:53.049875 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 11:28:53.049955 systemd[1]: Finished initrd-parse-etc.service.
Jul 15 11:28:53.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.052560 systemd[1]: Reached target initrd-fs.target.
Jul 15 11:28:53.052617 systemd[1]: Reached target initrd.target.
Jul 15 11:28:53.054949 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 15 11:28:53.055788 systemd[1]: Starting dracut-pre-pivot.service...
Jul 15 11:28:53.069203 systemd[1]: Finished dracut-pre-pivot.service.
Jul 15 11:28:53.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.070067 systemd[1]: Starting initrd-cleanup.service...
Jul 15 11:28:53.079383 systemd[1]: Stopped target nss-lookup.target.
Jul 15 11:28:53.079584 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 15 11:28:53.081065 systemd[1]: Stopped target timers.target.
Jul 15 11:28:53.084016 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 11:28:53.084161 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 15 11:28:53.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.086594 systemd[1]: Stopped target initrd.target.
Jul 15 11:28:53.086755 systemd[1]: Stopped target basic.target.
Jul 15 11:28:53.088100 systemd[1]: Stopped target ignition-complete.target.
Jul 15 11:28:53.089472 systemd[1]: Stopped target ignition-diskful.target.
Jul 15 11:28:53.091038 systemd[1]: Stopped target initrd-root-device.target.
Jul 15 11:28:53.094060 systemd[1]: Stopped target remote-fs.target.
Jul 15 11:28:53.094182 systemd[1]: Stopped target remote-fs-pre.target.
Jul 15 11:28:53.095629 systemd[1]: Stopped target sysinit.target.
Jul 15 11:28:53.097175 systemd[1]: Stopped target local-fs.target.
Jul 15 11:28:53.097485 systemd[1]: Stopped target local-fs-pre.target.
Jul 15 11:28:53.100426 systemd[1]: Stopped target swap.target.
Jul 15 11:28:53.101178 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 11:28:53.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.101268 systemd[1]: Stopped dracut-pre-mount.service.
Jul 15 11:28:53.102557 systemd[1]: Stopped target cryptsetup.target.
Jul 15 11:28:53.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.104749 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 11:28:53.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.104841 systemd[1]: Stopped dracut-initqueue.service.
Jul 15 11:28:53.105546 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 11:28:53.105625 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 15 11:28:53.107740 systemd[1]: Stopped target paths.target.
Jul 15 11:28:53.108668 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 11:28:53.112560 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 15 11:28:53.115039 systemd[1]: Stopped target slices.target.
Jul 15 11:28:53.115202 systemd[1]: Stopped target sockets.target.
Jul 15 11:28:53.116562 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 11:28:53.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.116700 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 15 11:28:53.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.117876 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 11:28:53.117990 systemd[1]: Stopped ignition-files.service.
Jul 15 11:28:53.122265 systemd[1]: Stopping ignition-mount.service...
Jul 15 11:28:53.124341 systemd[1]: Stopping iscsid.service...
Jul 15 11:28:53.125619 iscsid[732]: iscsid shutting down.
Jul 15 11:28:53.126092 systemd[1]: Stopping sysroot-boot.service...
Jul 15 11:28:53.127872 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 11:28:53.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.128036 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 15 11:28:53.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.130462 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 11:28:53.130597 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 15 11:28:53.133967 systemd[1]: iscsid.service: Deactivated successfully.
Jul 15 11:28:53.134071 systemd[1]: Stopped iscsid.service.
Jul 15 11:28:53.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.137542 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 11:28:53.137626 systemd[1]: Finished initrd-cleanup.service.
Jul 15 11:28:53.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.139346 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 11:28:53.139380 systemd[1]: Closed iscsid.socket.
Jul 15 11:28:53.141637 systemd[1]: Stopping iscsiuio.service...
Jul 15 11:28:53.143815 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 11:28:53.146326 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 15 11:28:53.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.148126 ignition[876]: INFO : Ignition 2.14.0
Jul 15 11:28:53.148126 ignition[876]: INFO : Stage: umount
Jul 15 11:28:53.148126 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 11:28:53.148126 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:28:53.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.146409 systemd[1]: Stopped iscsiuio.service.
Jul 15 11:28:53.158612 ignition[876]: INFO : umount: umount passed
Jul 15 11:28:53.158612 ignition[876]: INFO : Ignition finished successfully
Jul 15 11:28:53.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.148158 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 11:28:53.148194 systemd[1]: Closed iscsiuio.socket.
Jul 15 11:28:53.149629 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 11:28:53.149708 systemd[1]: Stopped ignition-mount.service.
Jul 15 11:28:53.151048 systemd[1]: Stopped target network.target.
Jul 15 11:28:53.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.152392 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 11:28:53.152434 systemd[1]: Stopped ignition-disks.service.
Jul 15 11:28:53.154309 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 11:28:53.154350 systemd[1]: Stopped ignition-kargs.service.
Jul 15 11:28:53.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.156999 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 11:28:53.157042 systemd[1]: Stopped ignition-setup.service.
Jul 15 11:28:53.158726 systemd[1]: Stopping systemd-networkd.service...
Jul 15 11:28:53.160688 systemd[1]: Stopping systemd-resolved.service...
Jul 15 11:28:53.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.164544 systemd-networkd[721]: eth0: DHCPv6 lease lost
Jul 15 11:28:53.181000 audit: BPF prog-id=9 op=UNLOAD
Jul 15 11:28:53.166410 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 11:28:53.166483 systemd[1]: Stopped systemd-networkd.service.
Jul 15 11:28:53.184000 audit: BPF prog-id=6 op=UNLOAD
Jul 15 11:28:53.168705 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 11:28:53.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.168738 systemd[1]: Closed systemd-networkd.socket.
Jul 15 11:28:53.171163 systemd[1]: Stopping network-cleanup.service...
Jul 15 11:28:53.172054 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 11:28:53.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.172097 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 15 11:28:53.174042 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 11:28:53.174080 systemd[1]: Stopped systemd-sysctl.service.
Jul 15 11:28:53.175697 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 11:28:53.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.175735 systemd[1]: Stopped systemd-modules-load.service.
Jul 15 11:28:53.176672 systemd[1]: Stopping systemd-udevd.service...
Jul 15 11:28:53.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.179266 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 11:28:53.179642 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 11:28:53.179722 systemd[1]: Stopped systemd-resolved.service.
Jul 15 11:28:53.209436 kernel: kauditd_printk_skb: 59 callbacks suppressed
Jul 15 11:28:53.209458 kernel: audit: type=1131 audit(1752578933.203:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.209470 kernel: audit: type=1131 audit(1752578933.208:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.185048 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 11:28:53.225059 kernel: audit: type=1131 audit(1752578933.212:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.225073 kernel: audit: type=1130 audit(1752578933.216:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.225083 kernel: audit: type=1131 audit(1752578933.216:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.185114 systemd[1]: Stopped network-cleanup.service.
Jul 15 11:28:53.187645 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 11:28:53.187774 systemd[1]: Stopped systemd-udevd.service.
Jul 15 11:28:53.190466 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 11:28:53.190511 systemd[1]: Closed systemd-udevd-control.socket.
Jul 15 11:28:53.192049 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 11:28:53.192075 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 15 11:28:53.193915 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 11:28:53.193955 systemd[1]: Stopped dracut-pre-udev.service.
Jul 15 11:28:53.195551 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 11:28:53.195582 systemd[1]: Stopped dracut-cmdline.service.
Jul 15 11:28:53.197537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 11:28:53.197578 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 15 11:28:53.200116 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 15 11:28:53.202145 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 11:28:53.202185 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Jul 15 11:28:53.207874 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 11:28:53.207925 systemd[1]: Stopped kmod-static-nodes.service.
Jul 15 11:28:53.209483 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 11:28:53.209574 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 15 11:28:53.213716 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 11:28:53.214123 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 11:28:53.214201 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 15 11:28:53.250781 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 11:28:53.250893 systemd[1]: Stopped sysroot-boot.service.
Jul 15 11:28:53.257379 kernel: audit: type=1131 audit(1752578933.251:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.252641 systemd[1]: Reached target initrd-switch-root.target.
Jul 15 11:28:53.263124 kernel: audit: type=1131 audit(1752578933.256:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:28:53.257391 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 11:28:53.257428 systemd[1]: Stopped initrd-setup-root.service.
Jul 15 11:28:53.258178 systemd[1]: Starting initrd-switch-root.service...
Jul 15 11:28:53.274356 systemd[1]: Switching root.
Jul 15 11:28:53.294060 systemd-journald[197]: Journal stopped
Jul 15 11:28:56.177952 systemd-journald[197]: Received SIGTERM from PID 1 (n/a).
Jul 15 11:28:56.178001 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 15 11:28:56.178016 kernel: SELinux: Class anon_inode not defined in policy.
Jul 15 11:28:56.178026 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 15 11:28:56.178041 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 11:28:56.178051 kernel: SELinux: policy capability open_perms=1
Jul 15 11:28:56.178060 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 11:28:56.178070 kernel: SELinux: policy capability always_check_network=0
Jul 15 11:28:56.178079 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 11:28:56.178091 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 11:28:56.178100 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 11:28:56.178110 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 11:28:56.178120 kernel: audit: type=1403 audit(1752578933.349:77): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 11:28:56.178131 systemd[1]: Successfully loaded SELinux policy in 36.401ms.
Jul 15 11:28:56.178146 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.353ms.
Jul 15 11:28:56.178158 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:28:56.178169 systemd[1]: Detected virtualization kvm.
Jul 15 11:28:56.178182 systemd[1]: Detected architecture x86-64.
Jul 15 11:28:56.178192 systemd[1]: Detected first boot.
Jul 15 11:28:56.178202 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:28:56.178213 kernel: audit: type=1400 audit(1752578933.922:78): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 15 11:28:56.178223 kernel: audit: type=1400 audit(1752578933.922:79): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 15 11:28:56.178235 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 15 11:28:56.178245 systemd[1]: Populated /etc with preset unit settings.
Jul 15 11:28:56.178256 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:28:56.178267 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:28:56.178279 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:28:56.178289 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 11:28:56.178300 systemd[1]: Stopped initrd-switch-root.service.
Jul 15 11:28:56.178311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 11:28:56.178321 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 15 11:28:56.178332 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 15 11:28:56.178344 systemd[1]: Created slice system-getty.slice.
Jul 15 11:28:56.178355 systemd[1]: Created slice system-modprobe.slice.
Jul 15 11:28:56.178365 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 15 11:28:56.178375 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 15 11:28:56.178387 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 15 11:28:56.178398 systemd[1]: Created slice user.slice. Jul 15 11:28:56.178409 systemd[1]: Started systemd-ask-password-console.path. Jul 15 11:28:56.178419 systemd[1]: Started systemd-ask-password-wall.path. Jul 15 11:28:56.178430 systemd[1]: Set up automount boot.automount. Jul 15 11:28:56.178440 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 15 11:28:56.178450 systemd[1]: Stopped target initrd-switch-root.target. Jul 15 11:28:56.178460 systemd[1]: Stopped target initrd-fs.target. Jul 15 11:28:56.178471 systemd[1]: Stopped target initrd-root-fs.target. Jul 15 11:28:56.178482 systemd[1]: Reached target integritysetup.target. Jul 15 11:28:56.178492 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:28:56.178529 systemd[1]: Reached target remote-fs.target. Jul 15 11:28:56.178540 systemd[1]: Reached target slices.target. Jul 15 11:28:56.178551 systemd[1]: Reached target swap.target. Jul 15 11:28:56.178561 systemd[1]: Reached target torcx.target. Jul 15 11:28:56.178572 systemd[1]: Reached target veritysetup.target. Jul 15 11:28:56.178585 systemd[1]: Listening on systemd-coredump.socket. Jul 15 11:28:56.178597 systemd[1]: Listening on systemd-initctl.socket. Jul 15 11:28:56.178608 systemd[1]: Listening on systemd-networkd.socket. Jul 15 11:28:56.178618 systemd[1]: Listening on systemd-udevd-control.socket. Jul 15 11:28:56.178629 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 15 11:28:56.178639 systemd[1]: Listening on systemd-userdbd.socket. Jul 15 11:28:56.178650 systemd[1]: Mounting dev-hugepages.mount... Jul 15 11:28:56.178660 systemd[1]: Mounting dev-mqueue.mount... Jul 15 11:28:56.178670 systemd[1]: Mounting media.mount... Jul 15 11:28:56.178681 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 15 11:28:56.178692 systemd[1]: Mounting sys-kernel-debug.mount... Jul 15 11:28:56.178702 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 15 11:28:56.178713 systemd[1]: Mounting tmp.mount... Jul 15 11:28:56.178724 systemd[1]: Starting flatcar-tmpfiles.service... Jul 15 11:28:56.178745 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:28:56.178756 systemd[1]: Starting kmod-static-nodes.service... Jul 15 11:28:56.178766 systemd[1]: Starting modprobe@configfs.service... Jul 15 11:28:56.178777 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:28:56.178789 systemd[1]: Starting modprobe@drm.service... Jul 15 11:28:56.178802 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:28:56.178833 systemd[1]: Starting modprobe@fuse.service... Jul 15 11:28:56.178849 systemd[1]: Starting modprobe@loop.service... Jul 15 11:28:56.178859 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 11:28:56.178872 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 11:28:56.178886 systemd[1]: Stopped systemd-fsck-root.service. Jul 15 11:28:56.178899 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 11:28:56.178910 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 11:28:56.178920 kernel: fuse: init (API version 7.34) Jul 15 11:28:56.178929 kernel: loop: module loaded Jul 15 11:28:56.178940 systemd[1]: Stopped systemd-journald.service. Jul 15 11:28:56.178952 systemd[1]: Starting systemd-journald.service... Jul 15 11:28:56.178962 systemd[1]: Starting systemd-modules-load.service... Jul 15 11:28:56.178972 systemd[1]: Starting systemd-network-generator.service... Jul 15 11:28:56.178983 systemd[1]: Starting systemd-remount-fs.service... Jul 15 11:28:56.178993 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:28:56.179003 systemd[1]: verity-setup.service: Deactivated successfully. 
Jul 15 11:28:56.179014 systemd[1]: Stopped verity-setup.service. Jul 15 11:28:56.179024 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:28:56.179037 systemd-journald[990]: Journal started Jul 15 11:28:56.179078 systemd-journald[990]: Runtime Journal (/run/log/journal/ea3ba73fd9f949a1b4a834e28afcd44c) is 6.0M, max 48.4M, 42.4M free. Jul 15 11:28:53.349000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 11:28:53.922000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:28:53.922000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 15 11:28:53.929000 audit: BPF prog-id=10 op=LOAD Jul 15 11:28:53.929000 audit: BPF prog-id=10 op=UNLOAD Jul 15 11:28:53.931000 audit: BPF prog-id=11 op=LOAD Jul 15 11:28:53.931000 audit: BPF prog-id=11 op=UNLOAD Jul 15 11:28:53.958000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 15 11:28:53.958000 audit[909]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58b4 a1=c000146de0 a2=c00014f040 a3=32 items=0 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:28:53.958000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 15 11:28:53.960000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 15 11:28:53.960000 audit[909]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5989 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:28:53.960000 audit: CWD cwd="/" Jul 15 11:28:53.960000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:53.960000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:53.960000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 15 11:28:56.041000 audit: BPF prog-id=12 op=LOAD Jul 15 11:28:56.041000 audit: BPF prog-id=3 op=UNLOAD Jul 15 11:28:56.041000 audit: BPF prog-id=13 op=LOAD Jul 15 11:28:56.041000 audit: BPF prog-id=14 op=LOAD Jul 15 11:28:56.041000 audit: BPF prog-id=4 op=UNLOAD Jul 15 11:28:56.041000 audit: BPF prog-id=5 op=UNLOAD Jul 15 11:28:56.042000 audit: 
BPF prog-id=15 op=LOAD Jul 15 11:28:56.042000 audit: BPF prog-id=12 op=UNLOAD Jul 15 11:28:56.042000 audit: BPF prog-id=16 op=LOAD Jul 15 11:28:56.042000 audit: BPF prog-id=17 op=LOAD Jul 15 11:28:56.042000 audit: BPF prog-id=13 op=UNLOAD Jul 15 11:28:56.042000 audit: BPF prog-id=14 op=UNLOAD Jul 15 11:28:56.043000 audit: BPF prog-id=18 op=LOAD Jul 15 11:28:56.043000 audit: BPF prog-id=15 op=UNLOAD Jul 15 11:28:56.043000 audit: BPF prog-id=19 op=LOAD Jul 15 11:28:56.043000 audit: BPF prog-id=20 op=LOAD Jul 15 11:28:56.043000 audit: BPF prog-id=16 op=UNLOAD Jul 15 11:28:56.043000 audit: BPF prog-id=17 op=UNLOAD Jul 15 11:28:56.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.055000 audit: BPF prog-id=18 op=UNLOAD Jul 15 11:28:56.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:56.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.159000 audit: BPF prog-id=21 op=LOAD Jul 15 11:28:56.159000 audit: BPF prog-id=22 op=LOAD Jul 15 11:28:56.159000 audit: BPF prog-id=23 op=LOAD Jul 15 11:28:56.159000 audit: BPF prog-id=19 op=UNLOAD Jul 15 11:28:56.159000 audit: BPF prog-id=20 op=UNLOAD Jul 15 11:28:56.176000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 15 11:28:56.176000 audit[990]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffca7758830 a2=4000 a3=7ffca77588cc items=0 ppid=1 pid=990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:28:56.176000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 15 11:28:56.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.040638 systemd[1]: Queued start job for default target multi-user.target. 
Jul 15 11:28:53.957175 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:28:56.040650 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 15 11:28:53.957386 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:28:56.044269 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 11:28:53.957403 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:28:53.957431 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 15 11:28:53.957441 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 15 11:28:53.957468 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 15 11:28:53.957480 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 15 11:28:53.957675 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 15 11:28:53.957708 /usr/lib/systemd/system-generators/torcx-generator[909]: 
time="2025-07-15T11:28:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:28:53.957721 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:28:53.958382 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 15 11:28:53.958413 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 15 11:28:53.958429 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.100: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.100 Jul 15 11:28:53.958442 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 15 11:28:53.958460 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.100: no such file or directory" path=/var/lib/torcx/store/3510.3.100 Jul 15 11:28:53.958476 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 15 11:28:55.784390 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Jul 15 11:28:55.784968 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:28:55.785103 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:28:55.785259 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:28:55.785320 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 15 11:28:55.785372 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2025-07-15T11:28:55Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 15 11:28:56.182984 systemd[1]: Started systemd-journald.service. 
Jul 15 11:28:56.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.183582 systemd[1]: Mounted dev-hugepages.mount. Jul 15 11:28:56.184451 systemd[1]: Mounted dev-mqueue.mount. Jul 15 11:28:56.185295 systemd[1]: Mounted media.mount. Jul 15 11:28:56.186097 systemd[1]: Mounted sys-kernel-debug.mount. Jul 15 11:28:56.187022 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 15 11:28:56.187938 systemd[1]: Mounted tmp.mount. Jul 15 11:28:56.188921 systemd[1]: Finished flatcar-tmpfiles.service. Jul 15 11:28:56.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.190076 systemd[1]: Finished kmod-static-nodes.service. Jul 15 11:28:56.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.191154 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 11:28:56.191339 systemd[1]: Finished modprobe@configfs.service. Jul 15 11:28:56.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.192414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 15 11:28:56.192573 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:28:56.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.193758 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:28:56.193953 systemd[1]: Finished modprobe@drm.service. Jul 15 11:28:56.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.195050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:28:56.195198 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:28:56.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.196295 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 11:28:56.196464 systemd[1]: Finished modprobe@fuse.service. 
Jul 15 11:28:56.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.197518 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:28:56.197617 systemd[1]: Finished modprobe@loop.service. Jul 15 11:28:56.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.198759 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:28:56.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.200049 systemd[1]: Finished systemd-network-generator.service. Jul 15 11:28:56.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.201331 systemd[1]: Finished systemd-remount-fs.service. 
Jul 15 11:28:56.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.202644 systemd[1]: Reached target network-pre.target. Jul 15 11:28:56.204698 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 15 11:28:56.206520 systemd[1]: Mounting sys-kernel-config.mount... Jul 15 11:28:56.207555 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 11:28:56.209110 systemd[1]: Starting systemd-hwdb-update.service... Jul 15 11:28:56.210865 systemd[1]: Starting systemd-journal-flush.service... Jul 15 11:28:56.212030 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:28:56.213031 systemd[1]: Starting systemd-random-seed.service... Jul 15 11:28:56.215075 systemd-journald[990]: Time spent on flushing to /var/log/journal/ea3ba73fd9f949a1b4a834e28afcd44c is 18.826ms for 1172 entries. Jul 15 11:28:56.215075 systemd-journald[990]: System Journal (/var/log/journal/ea3ba73fd9f949a1b4a834e28afcd44c) is 8.0M, max 195.6M, 187.6M free. Jul 15 11:28:56.251030 systemd-journald[990]: Received client request to flush runtime journal. Jul 15 11:28:56.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:56.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.214162 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:28:56.215636 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:28:56.218379 systemd[1]: Starting systemd-sysusers.service... Jul 15 11:28:56.220792 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:28:56.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.253015 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 15 11:28:56.221898 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 15 11:28:56.223075 systemd[1]: Mounted sys-kernel-config.mount. Jul 15 11:28:56.224150 systemd[1]: Finished systemd-random-seed.service. Jul 15 11:28:56.225151 systemd[1]: Reached target first-boot-complete.target. Jul 15 11:28:56.227167 systemd[1]: Starting systemd-udev-settle.service... Jul 15 11:28:56.235217 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:28:56.241119 systemd[1]: Finished systemd-sysusers.service. Jul 15 11:28:56.242998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 15 11:28:56.251887 systemd[1]: Finished systemd-journal-flush.service. Jul 15 11:28:56.264652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 15 11:28:56.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.635873 systemd[1]: Finished systemd-hwdb-update.service. Jul 15 11:28:56.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.637000 audit: BPF prog-id=24 op=LOAD Jul 15 11:28:56.637000 audit: BPF prog-id=25 op=LOAD Jul 15 11:28:56.637000 audit: BPF prog-id=7 op=UNLOAD Jul 15 11:28:56.637000 audit: BPF prog-id=8 op=UNLOAD Jul 15 11:28:56.638572 systemd[1]: Starting systemd-udevd.service... Jul 15 11:28:56.653681 systemd-udevd[1017]: Using default interface naming scheme 'v252'. Jul 15 11:28:56.665241 systemd[1]: Started systemd-udevd.service. Jul 15 11:28:56.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.667000 audit: BPF prog-id=26 op=LOAD Jul 15 11:28:56.669301 systemd[1]: Starting systemd-networkd.service... Jul 15 11:28:56.675000 audit: BPF prog-id=27 op=LOAD Jul 15 11:28:56.675000 audit: BPF prog-id=28 op=LOAD Jul 15 11:28:56.675000 audit: BPF prog-id=29 op=LOAD Jul 15 11:28:56.676823 systemd[1]: Starting systemd-userdbd.service... Jul 15 11:28:56.698117 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 15 11:28:56.707739 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:28:56.712788 systemd[1]: Started systemd-userdbd.service. 
Jul 15 11:28:56.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.736529 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 15 11:28:56.742535 kernel: ACPI: button: Power Button [PWRF] Jul 15 11:28:56.754137 systemd-networkd[1027]: lo: Link UP Jul 15 11:28:56.754151 systemd-networkd[1027]: lo: Gained carrier Jul 15 11:28:56.754681 systemd-networkd[1027]: Enumeration completed Jul 15 11:28:56.754958 systemd[1]: Started systemd-networkd.service. Jul 15 11:28:56.755167 systemd-networkd[1027]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:28:56.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:56.757259 systemd-networkd[1027]: eth0: Link UP Jul 15 11:28:56.757357 systemd-networkd[1027]: eth0: Gained carrier Jul 15 11:28:56.759000 audit[1042]: AVC avc: denied { confidentiality } for pid=1042 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 15 11:28:56.759000 audit[1042]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55dcb6feb100 a1=338ac a2=7f79ae242bc5 a3=5 items=110 ppid=1017 pid=1042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:28:56.759000 audit: CWD cwd="/" Jul 15 11:28:56.759000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=1 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=2 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=3 name=(null) inode=10953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=4 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=5 name=(null) inode=10954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=6 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=7 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=8 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=9 name=(null) inode=10956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=10 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=11 name=(null) inode=10957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=12 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=13 name=(null) inode=10958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=14 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=15 name=(null) inode=10959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=16 name=(null) inode=10955 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=17 name=(null) inode=10960 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=18 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=19 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=20 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=21 name=(null) inode=10962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=22 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=23 name=(null) inode=10963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=24 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=25 name=(null) inode=10964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=26 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=27 name=(null) inode=10965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=28 name=(null) inode=10961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=29 name=(null) inode=10966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=30 name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=31 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=32 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 
11:28:56.759000 audit: PATH item=33 name=(null) inode=10968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=34 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=35 name=(null) inode=10969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=36 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=37 name=(null) inode=10970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=38 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=39 name=(null) inode=10971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=40 name=(null) inode=10967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=41 name=(null) inode=10972 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=42 
name=(null) inode=10952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=43 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=44 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=45 name=(null) inode=10974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=46 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=47 name=(null) inode=10975 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=48 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=49 name=(null) inode=10976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=50 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=51 name=(null) inode=10977 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=52 name=(null) inode=10973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=53 name=(null) inode=10978 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=55 name=(null) inode=10979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=56 name=(null) inode=10979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=57 name=(null) inode=10980 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=58 name=(null) inode=10979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=59 name=(null) inode=10981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=60 name=(null) inode=10979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=61 name=(null) inode=10982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=62 name=(null) inode=10982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=63 name=(null) inode=10983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=64 name=(null) inode=10982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=65 name=(null) inode=10984 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=66 name=(null) inode=10982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=67 name=(null) inode=10985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=68 name=(null) inode=10982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=69 name=(null) inode=10986 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=70 name=(null) inode=10982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=71 name=(null) inode=10987 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=72 name=(null) inode=10979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=73 name=(null) inode=10988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=74 name=(null) inode=10988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=75 name=(null) inode=10989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=76 name=(null) inode=10988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=77 name=(null) inode=10990 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=78 name=(null) inode=10988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=79 name=(null) inode=10991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=80 name=(null) inode=10988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=81 name=(null) inode=10992 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=82 name=(null) inode=10988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=83 name=(null) inode=10993 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=84 name=(null) inode=10979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=85 name=(null) inode=10994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=86 name=(null) inode=10994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=87 name=(null) inode=10995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 
11:28:56.759000 audit: PATH item=88 name=(null) inode=10994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=89 name=(null) inode=10996 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=90 name=(null) inode=10994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=91 name=(null) inode=10997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=92 name=(null) inode=10994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=93 name=(null) inode=10998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=94 name=(null) inode=10994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=95 name=(null) inode=10999 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=96 name=(null) inode=10979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=97 
name=(null) inode=11000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=98 name=(null) inode=11000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=99 name=(null) inode=11001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=100 name=(null) inode=11000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=101 name=(null) inode=11002 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=102 name=(null) inode=11000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=103 name=(null) inode=11003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=104 name=(null) inode=11000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=105 name=(null) inode=11004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=106 name=(null) inode=11000 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=107 name=(null) inode=11005 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PATH item=109 name=(null) inode=11006 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:28:56.759000 audit: PROCTITLE proctitle="(udev-worker)" Jul 15 11:28:56.770397 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 15 11:28:56.773763 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 11:28:56.773877 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 15 11:28:56.773996 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 11:28:56.774080 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 11:28:56.774652 systemd-networkd[1027]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:28:56.791521 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 11:28:56.843949 kernel: kvm: Nested Virtualization enabled Jul 15 11:28:56.844038 kernel: SVM: kvm: Nested Paging enabled Jul 15 11:28:56.844053 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 15 11:28:56.844578 kernel: SVM: Virtual GIF supported Jul 15 11:28:56.860519 kernel: EDAC MC: Ver: 3.0.0 Jul 15 11:28:56.887905 systemd[1]: Finished systemd-udev-settle.service. 
Jul 15 11:28:56.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.889955 systemd[1]: Starting lvm2-activation-early.service... Jul 15 11:28:56.896925 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:28:56.922261 systemd[1]: Finished lvm2-activation-early.service. Jul 15 11:28:56.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.923191 systemd[1]: Reached target cryptsetup.target. Jul 15 11:28:56.924755 systemd[1]: Starting lvm2-activation.service... Jul 15 11:28:56.929000 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:28:56.953286 systemd[1]: Finished lvm2-activation.service. Jul 15 11:28:56.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:56.954202 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:28:56.955013 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 11:28:56.955036 systemd[1]: Reached target local-fs.target. Jul 15 11:28:56.955806 systemd[1]: Reached target machines.target. Jul 15 11:28:56.957348 systemd[1]: Starting ldconfig.service... Jul 15 11:28:56.958242 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 15 11:28:56.958286 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:28:56.959064 systemd[1]: Starting systemd-boot-update.service... Jul 15 11:28:56.961114 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 15 11:28:56.963007 systemd[1]: Starting systemd-machine-id-commit.service... Jul 15 11:28:56.964742 systemd[1]: Starting systemd-sysext.service... Jul 15 11:28:56.966350 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl) Jul 15 11:28:56.967280 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 15 11:28:56.971910 systemd[1]: Unmounting usr-share-oem.mount... Jul 15 11:28:56.975878 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 15 11:28:56.976033 systemd[1]: Unmounted usr-share-oem.mount. Jul 15 11:28:56.985537 kernel: loop0: detected capacity change from 0 to 229808 Jul 15 11:28:57.007967 systemd-fsck[1064]: fsck.fat 4.2 (2021-01-31) Jul 15 11:28:57.007967 systemd-fsck[1064]: /dev/vda1: 791 files, 120745/258078 clusters Jul 15 11:28:57.012171 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 15 11:28:57.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.013541 systemd[1]: Mounting boot.mount... Jul 15 11:28:57.015406 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Jul 15 11:28:57.025016 systemd[1]: Mounted boot.mount. Jul 15 11:28:57.277385 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 11:28:57.278221 systemd[1]: Finished systemd-machine-id-commit.service. Jul 15 11:28:57.279529 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 11:28:57.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.282347 systemd[1]: Finished systemd-boot-update.service. Jul 15 11:28:57.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.300529 kernel: loop1: detected capacity change from 0 to 229808 Jul 15 11:28:57.305452 (sd-sysext)[1070]: Using extensions 'kubernetes'. Jul 15 11:28:57.305895 (sd-sysext)[1070]: Merged extensions into '/usr'. Jul 15 11:28:57.323078 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:28:57.324635 systemd[1]: Mounting usr-share-oem.mount... Jul 15 11:28:57.325751 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.326905 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:28:57.328955 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:28:57.330947 systemd[1]: Starting modprobe@loop.service... Jul 15 11:28:57.331877 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.332026 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 15 11:28:57.332168 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:28:57.335270 systemd[1]: Mounted usr-share-oem.mount. Jul 15 11:28:57.336828 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:28:57.337015 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:28:57.337281 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 11:28:57.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.338810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:28:57.338958 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:28:57.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.340641 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:28:57.340815 systemd[1]: Finished modprobe@loop.service. Jul 15 11:28:57.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:57.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.342479 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:28:57.342642 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.343945 systemd[1]: Finished ldconfig.service. Jul 15 11:28:57.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.345339 systemd[1]: Finished systemd-sysext.service. Jul 15 11:28:57.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.348062 systemd[1]: Starting ensure-sysext.service... Jul 15 11:28:57.350408 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 15 11:28:57.355971 systemd[1]: Reloading. Jul 15 11:28:57.358988 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 15 11:28:57.359946 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 11:28:57.361254 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 15 11:28:57.403481 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-07-15T11:28:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:28:57.403885 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-07-15T11:28:57Z" level=info msg="torcx already run" Jul 15 11:28:57.475352 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:28:57.475367 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:28:57.493228 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 15 11:28:57.545000 audit: BPF prog-id=30 op=LOAD Jul 15 11:28:57.545000 audit: BPF prog-id=21 op=UNLOAD Jul 15 11:28:57.546000 audit: BPF prog-id=31 op=LOAD Jul 15 11:28:57.546000 audit: BPF prog-id=32 op=LOAD Jul 15 11:28:57.546000 audit: BPF prog-id=22 op=UNLOAD Jul 15 11:28:57.546000 audit: BPF prog-id=23 op=UNLOAD Jul 15 11:28:57.547000 audit: BPF prog-id=33 op=LOAD Jul 15 11:28:57.547000 audit: BPF prog-id=27 op=UNLOAD Jul 15 11:28:57.548000 audit: BPF prog-id=34 op=LOAD Jul 15 11:28:57.548000 audit: BPF prog-id=35 op=LOAD Jul 15 11:28:57.548000 audit: BPF prog-id=28 op=UNLOAD Jul 15 11:28:57.548000 audit: BPF prog-id=29 op=UNLOAD Jul 15 11:28:57.548000 audit: BPF prog-id=36 op=LOAD Jul 15 11:28:57.548000 audit: BPF prog-id=26 op=UNLOAD Jul 15 11:28:57.548000 audit: BPF prog-id=37 op=LOAD Jul 15 11:28:57.548000 audit: BPF prog-id=38 op=LOAD Jul 15 11:28:57.548000 audit: BPF prog-id=24 op=UNLOAD Jul 15 11:28:57.548000 audit: BPF prog-id=25 op=UNLOAD Jul 15 11:28:57.551257 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 15 11:28:57.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.555947 systemd[1]: Starting audit-rules.service... Jul 15 11:28:57.557849 systemd[1]: Starting clean-ca-certificates.service... Jul 15 11:28:57.560165 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 15 11:28:57.561000 audit: BPF prog-id=39 op=LOAD Jul 15 11:28:57.562744 systemd[1]: Starting systemd-resolved.service... Jul 15 11:28:57.563000 audit: BPF prog-id=40 op=LOAD Jul 15 11:28:57.564987 systemd[1]: Starting systemd-timesyncd.service... Jul 15 11:28:57.567091 systemd[1]: Starting systemd-update-utmp.service... Jul 15 11:28:57.568448 systemd[1]: Finished clean-ca-certificates.service. 
Jul 15 11:28:57.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.571000 audit[1144]: SYSTEM_BOOT pid=1144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.576821 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:28:57.577038 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.578607 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:28:57.580541 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:28:57.582405 systemd[1]: Starting modprobe@loop.service... Jul 15 11:28:57.583267 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.583422 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:28:57.583606 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:28:57.583722 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:28:57.585216 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 15 11:28:57.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:57.586715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:28:57.586816 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:28:57.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.588059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:28:57.588163 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:28:57.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.589403 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:28:57.589497 systemd[1]: Finished modprobe@loop.service. Jul 15 11:28:57.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.592258 systemd[1]: Finished systemd-update-utmp.service. 
Jul 15 11:28:57.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.594059 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:28:57.594228 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.595343 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:28:57.596886 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:28:57.598574 systemd[1]: Starting modprobe@loop.service... Jul 15 11:28:57.599331 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.599421 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:28:57.600361 systemd[1]: Starting systemd-update-done.service... Jul 15 11:28:57.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:28:57.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.601414 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:28:57.601493 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:28:57.602318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:28:57.602414 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:28:57.603686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:28:57.603786 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:28:57.605037 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:28:57.605147 systemd[1]: Finished modprobe@loop.service. Jul 15 11:28:57.606290 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:28:57.606371 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.606681 systemd[1]: Finished systemd-update-done.service. 
Jul 15 11:28:57.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:28:57.609991 augenrules[1165]: No rules Jul 15 11:28:57.609000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 15 11:28:57.609000 audit[1165]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff15e17710 a2=420 a3=0 items=0 ppid=1138 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:28:57.609000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 15 11:28:57.610273 systemd[1]: Finished audit-rules.service. Jul 15 11:28:57.611430 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:28:57.611678 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.612923 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:28:57.614730 systemd[1]: Starting modprobe@drm.service... Jul 15 11:28:57.616458 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:28:57.618078 systemd[1]: Starting modprobe@loop.service... Jul 15 11:28:57.619139 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:28:57.619237 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:28:57.620140 systemd[1]: Starting systemd-networkd-wait-online.service... 
Jul 15 11:28:57.621261 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:28:57.621345 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:28:57.622231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:28:57.622330 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:28:57.623690 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:28:57.623790 systemd[1]: Finished modprobe@drm.service. Jul 15 11:28:57.624676 systemd-resolved[1141]: Positive Trust Anchors: Jul 15 11:28:57.624685 systemd-resolved[1141]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:28:57.624720 systemd-resolved[1141]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:28:57.625037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:28:57.625127 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:28:57.626459 systemd[1]: Started systemd-timesyncd.service. Jul 15 11:28:58.342563 systemd-timesyncd[1142]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 11:28:58.342608 systemd-timesyncd[1142]: Initial clock synchronization to Tue 2025-07-15 11:28:58.342501 UTC. Jul 15 11:28:58.342940 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:28:58.343036 systemd[1]: Finished modprobe@loop.service. Jul 15 11:28:58.344647 systemd[1]: Reached target time-set.target. 
Jul 15 11:28:58.345644 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:28:58.345675 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:28:58.345904 systemd[1]: Finished ensure-sysext.service. Jul 15 11:28:58.346934 systemd-resolved[1141]: Defaulting to hostname 'linux'. Jul 15 11:28:58.348178 systemd[1]: Started systemd-resolved.service. Jul 15 11:28:58.349228 systemd[1]: Reached target network.target. Jul 15 11:28:58.350057 systemd[1]: Reached target nss-lookup.target. Jul 15 11:28:58.350942 systemd[1]: Reached target sysinit.target. Jul 15 11:28:58.351872 systemd[1]: Started motdgen.path. Jul 15 11:28:58.352621 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 15 11:28:58.353845 systemd[1]: Started logrotate.timer. Jul 15 11:28:58.354673 systemd[1]: Started mdadm.timer. Jul 15 11:28:58.355371 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 15 11:28:58.356230 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 11:28:58.356262 systemd[1]: Reached target paths.target. Jul 15 11:28:58.357026 systemd[1]: Reached target timers.target. Jul 15 11:28:58.358132 systemd[1]: Listening on dbus.socket. Jul 15 11:28:58.359806 systemd[1]: Starting docker.socket... Jul 15 11:28:58.362530 systemd[1]: Listening on sshd.socket. Jul 15 11:28:58.363401 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:28:58.363738 systemd[1]: Listening on docker.socket. Jul 15 11:28:58.364589 systemd[1]: Reached target sockets.target. Jul 15 11:28:58.365385 systemd[1]: Reached target basic.target. 
Jul 15 11:28:58.366184 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:28:58.366205 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:28:58.366990 systemd[1]: Starting containerd.service... Jul 15 11:28:58.368538 systemd[1]: Starting dbus.service... Jul 15 11:28:58.370022 systemd[1]: Starting enable-oem-cloudinit.service... Jul 15 11:28:58.371635 systemd[1]: Starting extend-filesystems.service... Jul 15 11:28:58.372714 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 15 11:28:58.374034 jq[1180]: false Jul 15 11:28:58.373596 systemd[1]: Starting motdgen.service... Jul 15 11:28:58.375124 systemd[1]: Starting prepare-helm.service... Jul 15 11:28:58.376729 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 15 11:28:58.378655 systemd[1]: Starting sshd-keygen.service... Jul 15 11:28:58.381379 systemd[1]: Starting systemd-logind.service... Jul 15 11:28:58.382586 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:28:58.382635 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 11:28:58.384221 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 11:28:58.384905 systemd[1]: Starting update-engine.service... 
Jul 15 11:28:58.385949 extend-filesystems[1181]: Found loop1 Jul 15 11:28:58.386844 extend-filesystems[1181]: Found sr0 Jul 15 11:28:58.386844 extend-filesystems[1181]: Found vda Jul 15 11:28:58.386844 extend-filesystems[1181]: Found vda1 Jul 15 11:28:58.386844 extend-filesystems[1181]: Found vda2 Jul 15 11:28:58.386844 extend-filesystems[1181]: Found vda3 Jul 15 11:28:58.386844 extend-filesystems[1181]: Found usr Jul 15 11:28:58.386844 extend-filesystems[1181]: Found vda4 Jul 15 11:28:58.399788 extend-filesystems[1181]: Found vda6 Jul 15 11:28:58.399788 extend-filesystems[1181]: Found vda7 Jul 15 11:28:58.399788 extend-filesystems[1181]: Found vda9 Jul 15 11:28:58.399788 extend-filesystems[1181]: Checking size of /dev/vda9 Jul 15 11:28:58.387972 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 15 11:28:58.406593 jq[1199]: true Jul 15 11:28:58.399515 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 11:28:58.400577 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 15 11:28:58.400823 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 11:28:58.400955 systemd[1]: Finished motdgen.service. Jul 15 11:28:58.405711 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 11:28:58.405861 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 15 11:28:58.411829 jq[1206]: true Jul 15 11:28:58.412956 dbus-daemon[1179]: [system] SELinux support is enabled Jul 15 11:28:58.413086 systemd[1]: Started dbus.service. Jul 15 11:28:58.413386 extend-filesystems[1181]: Resized partition /dev/vda9 Jul 15 11:28:58.416130 extend-filesystems[1210]: resize2fs 1.46.5 (30-Dec-2021) Jul 15 11:28:58.418145 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 11:28:58.418167 systemd[1]: Reached target system-config.target. 
Jul 15 11:28:58.420518 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 11:28:58.420264 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 11:28:58.420278 systemd[1]: Reached target user-config.target. Jul 15 11:28:58.431184 tar[1205]: linux-amd64/LICENSE Jul 15 11:28:58.431184 tar[1205]: linux-amd64/helm Jul 15 11:28:58.432418 systemd-logind[1191]: Watching system buttons on /dev/input/event1 (Power Button) Jul 15 11:28:58.432436 systemd-logind[1191]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 11:28:58.433084 systemd-logind[1191]: New seat seat0. Jul 15 11:28:58.437575 systemd[1]: Started systemd-logind.service. Jul 15 11:28:58.441194 update_engine[1196]: I0715 11:28:58.440912 1196 main.cc:92] Flatcar Update Engine starting Jul 15 11:28:58.446808 update_engine[1196]: I0715 11:28:58.445844 1196 update_check_scheduler.cc:74] Next update check in 2m17s Jul 15 11:28:58.446805 systemd[1]: Started update-engine.service. Jul 15 11:28:58.448478 env[1207]: time="2025-07-15T11:28:58.448424719Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 15 11:28:58.452520 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 11:28:58.452334 systemd[1]: Started locksmithd.service. Jul 15 11:28:58.501838 env[1207]: time="2025-07-15T11:28:58.472490965Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 15 11:28:58.501838 env[1207]: time="2025-07-15T11:28:58.501584374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 15 11:28:58.503492 extend-filesystems[1210]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 11:28:58.503492 extend-filesystems[1210]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 11:28:58.503492 extend-filesystems[1210]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.502923986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.502969842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.503271096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.503292907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.503309649Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.503322042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.503446616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.503804246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.503985947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:28:58.508086 env[1207]: time="2025-07-15T11:28:58.504023567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 15 11:28:58.503900 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 11:28:58.508401 extend-filesystems[1181]: Resized filesystem in /dev/vda9 Jul 15 11:28:58.509527 env[1207]: time="2025-07-15T11:28:58.504095442Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 15 11:28:58.509527 env[1207]: time="2025-07-15T11:28:58.504115239Z" level=info msg="metadata content store policy set" policy=shared Jul 15 11:28:58.504368 systemd[1]: Finished extend-filesystems.service. Jul 15 11:28:58.522489 locksmithd[1234]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 11:28:58.532865 bash[1230]: Updated "/home/core/.ssh/authorized_keys" Jul 15 11:28:58.533734 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534556055Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534614023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534627639Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534658857Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534673004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534685838Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534696738Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534708921Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534721785Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534746652Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534759105Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534770467Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 15 11:28:58.534926 env[1207]: time="2025-07-15T11:28:58.534899148Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 15 11:28:58.535294 env[1207]: time="2025-07-15T11:28:58.534961565Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 15 11:28:58.535294 env[1207]: time="2025-07-15T11:28:58.535186286Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 15 11:28:58.535294 env[1207]: time="2025-07-15T11:28:58.535207205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535294 env[1207]: time="2025-07-15T11:28:58.535221552Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 15 11:28:58.535294 env[1207]: time="2025-07-15T11:28:58.535276074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535294 env[1207]: time="2025-07-15T11:28:58.535287736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535294 env[1207]: time="2025-07-15T11:28:58.535299759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535488 env[1207]: time="2025-07-15T11:28:58.535310860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535488 env[1207]: time="2025-07-15T11:28:58.535321910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535488 env[1207]: time="2025-07-15T11:28:58.535350774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535488 env[1207]: time="2025-07-15T11:28:58.535361214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 15 11:28:58.535488 env[1207]: time="2025-07-15T11:28:58.535371233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535488 env[1207]: time="2025-07-15T11:28:58.535382434Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 15 11:28:58.535643 env[1207]: time="2025-07-15T11:28:58.535493101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535643 env[1207]: time="2025-07-15T11:28:58.535506857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535643 env[1207]: time="2025-07-15T11:28:58.535517567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 15 11:28:58.535643 env[1207]: time="2025-07-15T11:28:58.535529019Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 15 11:28:58.535643 env[1207]: time="2025-07-15T11:28:58.535543335Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 15 11:28:58.535643 env[1207]: time="2025-07-15T11:28:58.535553164Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 15 11:28:58.535643 env[1207]: time="2025-07-15T11:28:58.535570637Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 15 11:28:58.535643 env[1207]: time="2025-07-15T11:28:58.535603358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 15 11:28:58.535850 env[1207]: time="2025-07-15T11:28:58.535799486Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 15 11:28:58.536635 env[1207]: time="2025-07-15T11:28:58.535857424Z" level=info msg="Connect containerd service" Jul 15 11:28:58.536635 env[1207]: time="2025-07-15T11:28:58.535890717Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 15 11:28:58.536635 env[1207]: time="2025-07-15T11:28:58.536384142Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:28:58.536635 env[1207]: time="2025-07-15T11:28:58.536625173Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 11:28:58.537623 env[1207]: time="2025-07-15T11:28:58.536651803Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 11:28:58.537623 env[1207]: time="2025-07-15T11:28:58.536695385Z" level=info msg="containerd successfully booted in 0.092103s" Jul 15 11:28:58.536752 systemd[1]: Started containerd.service.
Jul 15 11:28:58.538008 env[1207]: time="2025-07-15T11:28:58.537965907Z" level=info msg="Start subscribing containerd event" Jul 15 11:28:58.538047 env[1207]: time="2025-07-15T11:28:58.538009990Z" level=info msg="Start recovering state" Jul 15 11:28:58.538075 env[1207]: time="2025-07-15T11:28:58.538055795Z" level=info msg="Start event monitor" Jul 15 11:28:58.538101 env[1207]: time="2025-07-15T11:28:58.538078378Z" level=info msg="Start snapshots syncer" Jul 15 11:28:58.538101 env[1207]: time="2025-07-15T11:28:58.538087585Z" level=info msg="Start cni network conf syncer for default" Jul 15 11:28:58.538101 env[1207]: time="2025-07-15T11:28:58.538093877Z" level=info msg="Start streaming server" Jul 15 11:28:58.867445 systemd-networkd[1027]: eth0: Gained IPv6LL Jul 15 11:28:58.869694 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 15 11:28:58.871152 systemd[1]: Reached target network-online.target. Jul 15 11:28:58.873705 systemd[1]: Starting kubelet.service... Jul 15 11:28:58.876035 tar[1205]: linux-amd64/README.md Jul 15 11:28:58.880414 systemd[1]: Finished prepare-helm.service. Jul 15 11:28:59.121709 sshd_keygen[1200]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 11:28:59.141013 systemd[1]: Finished sshd-keygen.service. Jul 15 11:28:59.143636 systemd[1]: Starting issuegen.service... Jul 15 11:28:59.148858 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 11:28:59.149034 systemd[1]: Finished issuegen.service. Jul 15 11:28:59.151480 systemd[1]: Starting systemd-user-sessions.service... Jul 15 11:28:59.157126 systemd[1]: Finished systemd-user-sessions.service. Jul 15 11:28:59.160312 systemd[1]: Started getty@tty1.service. Jul 15 11:28:59.162348 systemd[1]: Started serial-getty@ttyS0.service. Jul 15 11:28:59.163441 systemd[1]: Reached target getty.target. Jul 15 11:28:59.548840 systemd[1]: Started kubelet.service. Jul 15 11:28:59.550234 systemd[1]: Reached target multi-user.target. 
Jul 15 11:28:59.552282 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 15 11:28:59.558109 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 15 11:28:59.558284 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 15 11:28:59.559337 systemd[1]: Startup finished in 588ms (kernel) + 4.620s (initrd) + 5.533s (userspace) = 10.742s. Jul 15 11:28:59.964025 kubelet[1261]: E0715 11:28:59.963899 1261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:28:59.965696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:28:59.965802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:29:01.697974 systemd[1]: Created slice system-sshd.slice. Jul 15 11:29:01.698838 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:40004.service. Jul 15 11:29:01.739091 sshd[1270]: Accepted publickey for core from 10.0.0.1 port 40004 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:29:01.740289 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:01.748146 systemd-logind[1191]: New session 1 of user core. Jul 15 11:29:01.749035 systemd[1]: Created slice user-500.slice. Jul 15 11:29:01.750006 systemd[1]: Starting user-runtime-dir@500.service... Jul 15 11:29:01.757232 systemd[1]: Finished user-runtime-dir@500.service. Jul 15 11:29:01.758394 systemd[1]: Starting user@500.service... Jul 15 11:29:01.760945 (systemd)[1273]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:01.829953 systemd[1273]: Queued start job for default target default.target. Jul 15 11:29:01.830329 systemd[1273]: Reached target paths.target. 
Jul 15 11:29:01.830356 systemd[1273]: Reached target sockets.target. Jul 15 11:29:01.830371 systemd[1273]: Reached target timers.target. Jul 15 11:29:01.830384 systemd[1273]: Reached target basic.target. Jul 15 11:29:01.830424 systemd[1273]: Reached target default.target. Jul 15 11:29:01.830446 systemd[1273]: Startup finished in 64ms. Jul 15 11:29:01.830506 systemd[1]: Started user@500.service. Jul 15 11:29:01.831514 systemd[1]: Started session-1.scope. Jul 15 11:29:01.881239 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:40010.service. Jul 15 11:29:01.923138 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 40010 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:29:01.924193 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:01.928203 systemd-logind[1191]: New session 2 of user core. Jul 15 11:29:01.929170 systemd[1]: Started session-2.scope. Jul 15 11:29:01.981377 sshd[1282]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:01.983688 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:40010.service: Deactivated successfully. Jul 15 11:29:01.984161 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 11:29:01.984621 systemd-logind[1191]: Session 2 logged out. Waiting for processes to exit. Jul 15 11:29:01.985431 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:40022.service. Jul 15 11:29:01.985980 systemd-logind[1191]: Removed session 2. Jul 15 11:29:02.023063 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 40022 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:29:02.024039 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:02.027278 systemd-logind[1191]: New session 3 of user core. Jul 15 11:29:02.028284 systemd[1]: Started session-3.scope. 
Jul 15 11:29:02.078009 sshd[1288]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:02.080654 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:40022.service: Deactivated successfully. Jul 15 11:29:02.081163 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 11:29:02.081648 systemd-logind[1191]: Session 3 logged out. Waiting for processes to exit. Jul 15 11:29:02.082750 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:40038.service. Jul 15 11:29:02.083346 systemd-logind[1191]: Removed session 3. Jul 15 11:29:02.121308 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 40038 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:29:02.122265 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:02.125279 systemd-logind[1191]: New session 4 of user core. Jul 15 11:29:02.126019 systemd[1]: Started session-4.scope. Jul 15 11:29:02.179221 sshd[1294]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:02.181986 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:40038.service: Deactivated successfully. Jul 15 11:29:02.182486 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 11:29:02.182967 systemd-logind[1191]: Session 4 logged out. Waiting for processes to exit. Jul 15 11:29:02.183969 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:40040.service. Jul 15 11:29:02.184743 systemd-logind[1191]: Removed session 4. Jul 15 11:29:02.223183 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 40040 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:29:02.224345 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:02.227861 systemd-logind[1191]: New session 5 of user core. Jul 15 11:29:02.228734 systemd[1]: Started session-5.scope. 
Jul 15 11:29:02.284272 sudo[1303]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 11:29:02.284496 sudo[1303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:29:02.308411 systemd[1]: Starting docker.service... Jul 15 11:29:02.359112 env[1316]: time="2025-07-15T11:29:02.359049484Z" level=info msg="Starting up" Jul 15 11:29:02.360398 env[1316]: time="2025-07-15T11:29:02.360358658Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 15 11:29:02.360398 env[1316]: time="2025-07-15T11:29:02.360386300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 15 11:29:02.360486 env[1316]: time="2025-07-15T11:29:02.360419903Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 15 11:29:02.360486 env[1316]: time="2025-07-15T11:29:02.360433338Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 15 11:29:02.362009 env[1316]: time="2025-07-15T11:29:02.361979657Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 15 11:29:02.362009 env[1316]: time="2025-07-15T11:29:02.361996929Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 15 11:29:02.362009 env[1316]: time="2025-07-15T11:29:02.362007108Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 15 11:29:02.362101 env[1316]: time="2025-07-15T11:29:02.362014472Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 15 11:29:02.945859 env[1316]: time="2025-07-15T11:29:02.945796695Z" level=info msg="Loading containers: start." 
Jul 15 11:29:03.637273 kernel: Initializing XFRM netlink socket Jul 15 11:29:03.664135 env[1316]: time="2025-07-15T11:29:03.664097262Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 15 11:29:03.711138 systemd-networkd[1027]: docker0: Link UP Jul 15 11:29:03.982441 env[1316]: time="2025-07-15T11:29:03.982329815Z" level=info msg="Loading containers: done." Jul 15 11:29:03.993437 env[1316]: time="2025-07-15T11:29:03.993391692Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 11:29:03.993582 env[1316]: time="2025-07-15T11:29:03.993555349Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 15 11:29:03.993638 env[1316]: time="2025-07-15T11:29:03.993625741Z" level=info msg="Daemon has completed initialization" Jul 15 11:29:04.011370 systemd[1]: Started docker.service. Jul 15 11:29:04.014742 env[1316]: time="2025-07-15T11:29:04.014695619Z" level=info msg="API listen on /run/docker.sock" Jul 15 11:29:04.572826 env[1207]: time="2025-07-15T11:29:04.572770961Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 15 11:29:05.853612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158711132.mount: Deactivated successfully. 
Jul 15 11:29:07.546269 env[1207]: time="2025-07-15T11:29:07.546175092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:07.548069 env[1207]: time="2025-07-15T11:29:07.548042322Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:07.549805 env[1207]: time="2025-07-15T11:29:07.549759952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:07.551285 env[1207]: time="2025-07-15T11:29:07.551263220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:07.551895 env[1207]: time="2025-07-15T11:29:07.551865159Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 15 11:29:07.552374 env[1207]: time="2025-07-15T11:29:07.552341351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 15 11:29:10.019338 env[1207]: time="2025-07-15T11:29:10.019281423Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:10.022103 env[1207]: time="2025-07-15T11:29:10.022060763Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:29:10.024447 env[1207]: time="2025-07-15T11:29:10.024416910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:10.025937 env[1207]: time="2025-07-15T11:29:10.025899670Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:10.026668 env[1207]: time="2025-07-15T11:29:10.026637082Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 15 11:29:10.027157 env[1207]: time="2025-07-15T11:29:10.027132681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 15 11:29:10.192596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 11:29:10.192757 systemd[1]: Stopped kubelet.service. Jul 15 11:29:10.193960 systemd[1]: Starting kubelet.service... Jul 15 11:29:10.277215 systemd[1]: Started kubelet.service. Jul 15 11:29:10.312453 kubelet[1450]: E0715 11:29:10.312392 1450 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:29:10.315350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:29:10.315498 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:29:12.723755 env[1207]: time="2025-07-15T11:29:12.723698429Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:12.726818 env[1207]: time="2025-07-15T11:29:12.726745471Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:12.729233 env[1207]: time="2025-07-15T11:29:12.729175266Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:12.731464 env[1207]: time="2025-07-15T11:29:12.731416177Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:12.731958 env[1207]: time="2025-07-15T11:29:12.731917186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 15 11:29:12.732532 env[1207]: time="2025-07-15T11:29:12.732482446Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 15 11:29:15.842738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198382898.mount: Deactivated successfully. 
Jul 15 11:29:17.553836 env[1207]: time="2025-07-15T11:29:17.553768535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:17.837112 env[1207]: time="2025-07-15T11:29:17.836950610Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:17.875308 env[1207]: time="2025-07-15T11:29:17.875219631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:17.887701 env[1207]: time="2025-07-15T11:29:17.887639133Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:17.888027 env[1207]: time="2025-07-15T11:29:17.887979391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 15 11:29:17.888700 env[1207]: time="2025-07-15T11:29:17.888655098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 15 11:29:18.670746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419122245.mount: Deactivated successfully. 
Jul 15 11:29:19.937084 env[1207]: time="2025-07-15T11:29:19.937024539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:19.939147 env[1207]: time="2025-07-15T11:29:19.939124836Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:19.941003 env[1207]: time="2025-07-15T11:29:19.940976217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:19.942724 env[1207]: time="2025-07-15T11:29:19.942697243Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:19.943530 env[1207]: time="2025-07-15T11:29:19.943500329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 15 11:29:19.943980 env[1207]: time="2025-07-15T11:29:19.943963327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 11:29:20.366999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 11:29:20.367162 systemd[1]: Stopped kubelet.service. Jul 15 11:29:20.368662 systemd[1]: Starting kubelet.service... Jul 15 11:29:20.375754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1728303503.mount: Deactivated successfully. Jul 15 11:29:20.458210 systemd[1]: Started kubelet.service. 
Jul 15 11:29:20.595440 kubelet[1463]: E0715 11:29:20.595366 1463 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:29:20.597236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:29:20.597395 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:29:20.632568 env[1207]: time="2025-07-15T11:29:20.632426389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:20.634413 env[1207]: time="2025-07-15T11:29:20.634377587Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:20.636076 env[1207]: time="2025-07-15T11:29:20.636031698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:20.638281 env[1207]: time="2025-07-15T11:29:20.637540797Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:20.640197 env[1207]: time="2025-07-15T11:29:20.640146812Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 11:29:20.640718 env[1207]: time="2025-07-15T11:29:20.640694379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 15 11:29:21.115462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4258332220.mount: Deactivated successfully. Jul 15 11:29:24.614443 env[1207]: time="2025-07-15T11:29:24.614372366Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:24.616887 env[1207]: time="2025-07-15T11:29:24.616844199Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:24.618994 env[1207]: time="2025-07-15T11:29:24.618948825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:24.620848 env[1207]: time="2025-07-15T11:29:24.620805836Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:24.621521 env[1207]: time="2025-07-15T11:29:24.621488957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 15 11:29:28.148432 systemd[1]: Stopped kubelet.service. Jul 15 11:29:28.150728 systemd[1]: Starting kubelet.service... Jul 15 11:29:28.170476 systemd[1]: Reloading.
Jul 15 11:29:28.246642 /usr/lib/systemd/system-generators/torcx-generator[1520]: time="2025-07-15T11:29:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:29:28.246665 /usr/lib/systemd/system-generators/torcx-generator[1520]: time="2025-07-15T11:29:28Z" level=info msg="torcx already run" Jul 15 11:29:28.881487 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:29:28.881503 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:29:28.898225 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:29:28.972023 systemd[1]: Started kubelet.service. Jul 15 11:29:28.973523 systemd[1]: Stopping kubelet.service... Jul 15 11:29:28.973904 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:29:28.974111 systemd[1]: Stopped kubelet.service. Jul 15 11:29:28.975893 systemd[1]: Starting kubelet.service... Jul 15 11:29:29.062503 systemd[1]: Started kubelet.service. Jul 15 11:29:29.109886 kubelet[1568]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:29:29.109886 kubelet[1568]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 15 11:29:29.109886 kubelet[1568]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:29:29.110270 kubelet[1568]: I0715 11:29:29.109927 1568 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:29:29.649379 kubelet[1568]: I0715 11:29:29.649325 1568 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 11:29:29.649379 kubelet[1568]: I0715 11:29:29.649358 1568 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:29:29.649615 kubelet[1568]: I0715 11:29:29.649593 1568 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 11:29:30.038042 kubelet[1568]: E0715 11:29:30.037955 1568 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 11:29:30.038635 kubelet[1568]: I0715 11:29:30.038597 1568 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:29:30.051587 kubelet[1568]: E0715 11:29:30.051521 1568 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:29:30.051587 kubelet[1568]: I0715 11:29:30.051568 1568 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 15 11:29:30.055712 kubelet[1568]: I0715 11:29:30.055673 1568 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 11:29:30.055926 kubelet[1568]: I0715 11:29:30.055890 1568 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:29:30.056108 kubelet[1568]: I0715 11:29:30.055918 1568 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 11:29:30.056234 kubelet[1568]: I0715 11:29:30.056110 1568 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:29:30.056234 kubelet[1568]: I0715 11:29:30.056122 1568 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 11:29:30.056328 kubelet[1568]: I0715 11:29:30.056301 1568 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:29:30.059267 kubelet[1568]: I0715 11:29:30.059221 1568 kubelet.go:480] "Attempting to sync node with API server" Jul 15 11:29:30.059267 kubelet[1568]: I0715 11:29:30.059260 1568 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:29:30.059383 kubelet[1568]: I0715 11:29:30.059300 1568 kubelet.go:386] "Adding apiserver pod source" Jul 15 11:29:30.061280 kubelet[1568]: I0715 11:29:30.061260 1568 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:29:30.165520 kubelet[1568]: E0715 11:29:30.165478 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 11:29:30.165520 kubelet[1568]: E0715 11:29:30.165502 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 11:29:30.168052 kubelet[1568]: I0715 11:29:30.168031 1568 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:29:30.168535 kubelet[1568]: I0715 11:29:30.168511 1568 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 15 11:29:30.169157 kubelet[1568]: W0715 11:29:30.169135 1568 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 11:29:30.174294 kubelet[1568]: I0715 11:29:30.174268 1568 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 11:29:30.174428 kubelet[1568]: I0715 11:29:30.174319 1568 server.go:1289] "Started kubelet" Jul 15 11:29:30.175272 kubelet[1568]: I0715 11:29:30.175190 1568 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:29:30.175997 kubelet[1568]: I0715 11:29:30.175977 1568 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:29:30.176135 kubelet[1568]: I0715 11:29:30.176078 1568 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:29:30.177737 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 15 11:29:30.178460 kubelet[1568]: I0715 11:29:30.178437 1568 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:29:30.180014 kubelet[1568]: I0715 11:29:30.179991 1568 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:29:30.180378 kubelet[1568]: E0715 11:29:30.179338 1568 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18526949ae0ee8c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:29:30.174286018 +0000 UTC m=+1.108566748,LastTimestamp:2025-07-15 11:29:30.174286018 +0000 UTC m=+1.108566748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 11:29:30.180572 kubelet[1568]: I0715 11:29:30.180026 1568 server.go:317] "Adding debug handlers to kubelet server" Jul 15 11:29:30.182055 kubelet[1568]: E0715 11:29:30.182033 1568 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:29:30.182120 kubelet[1568]: E0715 11:29:30.182096 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:30.182120 kubelet[1568]: I0715 11:29:30.182117 1568 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 11:29:30.182502 kubelet[1568]: I0715 11:29:30.182481 1568 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 11:29:30.182568 kubelet[1568]: I0715 11:29:30.182557 1568 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:29:30.183094 kubelet[1568]: E0715 11:29:30.183068 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 11:29:30.183162 kubelet[1568]: E0715 11:29:30.183092 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" Jul 15 11:29:30.183635 kubelet[1568]: I0715 11:29:30.183600 1568 factory.go:223] Registration of the systemd container factory successfully Jul 15 11:29:30.183695 kubelet[1568]: I0715 11:29:30.183677 1568 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:29:30.184558 kubelet[1568]: I0715 11:29:30.184528 1568 factory.go:223] Registration of the containerd container factory successfully Jul 15 11:29:30.195625 kubelet[1568]: I0715 11:29:30.195587 1568 cpu_manager.go:221] "Starting CPU manager" 
policy="none" Jul 15 11:29:30.195625 kubelet[1568]: I0715 11:29:30.195609 1568 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 11:29:30.195625 kubelet[1568]: I0715 11:29:30.195626 1568 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:29:30.196779 kubelet[1568]: I0715 11:29:30.196745 1568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 11:29:30.197951 kubelet[1568]: I0715 11:29:30.197927 1568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 11:29:30.198023 kubelet[1568]: I0715 11:29:30.197958 1568 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 11:29:30.198023 kubelet[1568]: I0715 11:29:30.197981 1568 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 11:29:30.198023 kubelet[1568]: I0715 11:29:30.197997 1568 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 11:29:30.198160 kubelet[1568]: E0715 11:29:30.198054 1568 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:29:30.198779 kubelet[1568]: E0715 11:29:30.198727 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 11:29:30.282888 kubelet[1568]: E0715 11:29:30.282819 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:30.290934 kubelet[1568]: I0715 11:29:30.290826 1568 policy_none.go:49] "None policy: Start" Jul 15 11:29:30.290934 kubelet[1568]: I0715 11:29:30.290856 1568 memory_manager.go:186] "Starting memorymanager" 
policy="None" Jul 15 11:29:30.290934 kubelet[1568]: I0715 11:29:30.290870 1568 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:29:30.299087 kubelet[1568]: E0715 11:29:30.299027 1568 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 11:29:30.333928 systemd[1]: Created slice kubepods.slice. Jul 15 11:29:30.338536 systemd[1]: Created slice kubepods-burstable.slice. Jul 15 11:29:30.341082 systemd[1]: Created slice kubepods-besteffort.slice. Jul 15 11:29:30.353390 kubelet[1568]: E0715 11:29:30.353342 1568 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 11:29:30.353578 kubelet[1568]: I0715 11:29:30.353570 1568 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:29:30.353622 kubelet[1568]: I0715 11:29:30.353587 1568 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:29:30.354205 kubelet[1568]: I0715 11:29:30.353845 1568 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:29:30.354942 kubelet[1568]: E0715 11:29:30.354916 1568 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 11:29:30.355018 kubelet[1568]: E0715 11:29:30.354966 1568 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 11:29:30.383732 kubelet[1568]: E0715 11:29:30.383678 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Jul 15 11:29:30.455265 kubelet[1568]: I0715 11:29:30.455213 1568 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:29:30.455629 kubelet[1568]: E0715 11:29:30.455604 1568 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Jul 15 11:29:30.509993 systemd[1]: Created slice kubepods-burstable-podd700bfe3d9b53d903c613b93d056ecab.slice. Jul 15 11:29:30.518545 kubelet[1568]: E0715 11:29:30.518504 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:29:30.521341 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 15 11:29:30.524019 kubelet[1568]: E0715 11:29:30.523996 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:29:30.524244 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 15 11:29:30.525605 kubelet[1568]: E0715 11:29:30.525588 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:29:30.584844 kubelet[1568]: I0715 11:29:30.584258 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:30.584844 kubelet[1568]: I0715 11:29:30.584322 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:30.584844 kubelet[1568]: I0715 11:29:30.584433 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d700bfe3d9b53d903c613b93d056ecab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d700bfe3d9b53d903c613b93d056ecab\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:30.584844 kubelet[1568]: I0715 11:29:30.584484 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d700bfe3d9b53d903c613b93d056ecab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d700bfe3d9b53d903c613b93d056ecab\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:30.584844 kubelet[1568]: I0715 11:29:30.584528 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:30.585103 kubelet[1568]: I0715 11:29:30.584566 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:30.585103 kubelet[1568]: I0715 11:29:30.584889 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:30.585103 kubelet[1568]: I0715 11:29:30.584909 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d700bfe3d9b53d903c613b93d056ecab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d700bfe3d9b53d903c613b93d056ecab\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:30.585103 kubelet[1568]: I0715 11:29:30.584923 1568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:30.658083 kubelet[1568]: I0715 11:29:30.658044 1568 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 
11:29:30.658424 kubelet[1568]: E0715 11:29:30.658399 1568 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Jul 15 11:29:30.785220 kubelet[1568]: E0715 11:29:30.785170 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" Jul 15 11:29:30.819720 kubelet[1568]: E0715 11:29:30.819676 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:30.820555 env[1207]: time="2025-07-15T11:29:30.820475402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d700bfe3d9b53d903c613b93d056ecab,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:30.824847 kubelet[1568]: E0715 11:29:30.824805 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:30.825347 env[1207]: time="2025-07-15T11:29:30.825305277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:30.826496 kubelet[1568]: E0715 11:29:30.826478 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:30.826752 env[1207]: time="2025-07-15T11:29:30.826729947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:31.059517 
kubelet[1568]: I0715 11:29:31.059480 1568 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:29:31.059883 kubelet[1568]: E0715 11:29:31.059839 1568 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Jul 15 11:29:31.514381 kubelet[1568]: E0715 11:29:31.514159 1568 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18526949ae0ee8c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:29:30.174286018 +0000 UTC m=+1.108566748,LastTimestamp:2025-07-15 11:29:30.174286018 +0000 UTC m=+1.108566748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 11:29:31.526954 kubelet[1568]: E0715 11:29:31.526902 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 11:29:31.530662 kubelet[1568]: E0715 11:29:31.530634 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 11:29:31.537202 kubelet[1568]: E0715 11:29:31.537172 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 11:29:31.586139 kubelet[1568]: E0715 11:29:31.586094 1568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s" Jul 15 11:29:31.606731 kubelet[1568]: E0715 11:29:31.606670 1568 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 11:29:31.696334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581936074.mount: Deactivated successfully. 
Jul 15 11:29:31.702824 env[1207]: time="2025-07-15T11:29:31.702763130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.703696 env[1207]: time="2025-07-15T11:29:31.703655679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.706272 env[1207]: time="2025-07-15T11:29:31.706196783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.707239 env[1207]: time="2025-07-15T11:29:31.707209324Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.708200 env[1207]: time="2025-07-15T11:29:31.708168221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.710115 env[1207]: time="2025-07-15T11:29:31.710073672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.711282 env[1207]: time="2025-07-15T11:29:31.711240900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.712643 env[1207]: time="2025-07-15T11:29:31.712601912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.713986 env[1207]: time="2025-07-15T11:29:31.713953295Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.716675 env[1207]: time="2025-07-15T11:29:31.716634068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.718285 env[1207]: time="2025-07-15T11:29:31.718139729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.718927 env[1207]: time="2025-07-15T11:29:31.718896156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:31.741355 env[1207]: time="2025-07-15T11:29:31.741269402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:31.741355 env[1207]: time="2025-07-15T11:29:31.741306384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:31.741355 env[1207]: time="2025-07-15T11:29:31.741315542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:31.741716 env[1207]: time="2025-07-15T11:29:31.741684783Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc9c54def233bb56553f50c82420d603fc4ca6308b903dd8d10ae3c30039be1e pid=1615 runtime=io.containerd.runc.v2 Jul 15 11:29:31.754235 systemd[1]: Started cri-containerd-dc9c54def233bb56553f50c82420d603fc4ca6308b903dd8d10ae3c30039be1e.scope. Jul 15 11:29:31.762207 env[1207]: time="2025-07-15T11:29:31.762118324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:31.762207 env[1207]: time="2025-07-15T11:29:31.762160194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:31.762207 env[1207]: time="2025-07-15T11:29:31.762170423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:31.762839 env[1207]: time="2025-07-15T11:29:31.762794526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc2d02ad400b406766ab3e4d43ab151fc777ec2413f18fbdb51b26ebfdbaaa8c pid=1647 runtime=io.containerd.runc.v2 Jul 15 11:29:31.767518 env[1207]: time="2025-07-15T11:29:31.767350201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:31.767518 env[1207]: time="2025-07-15T11:29:31.767386962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:31.767808 env[1207]: time="2025-07-15T11:29:31.767400027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:31.767808 env[1207]: time="2025-07-15T11:29:31.767558312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6b0a7d3431e4e93f29831b8e4496cc9a31ed6544245e9f68760368822b6528 pid=1661 runtime=io.containerd.runc.v2 Jul 15 11:29:31.776947 systemd[1]: Started cri-containerd-bc2d02ad400b406766ab3e4d43ab151fc777ec2413f18fbdb51b26ebfdbaaa8c.scope. Jul 15 11:29:31.782833 systemd[1]: Started cri-containerd-0a6b0a7d3431e4e93f29831b8e4496cc9a31ed6544245e9f68760368822b6528.scope. Jul 15 11:29:31.803200 env[1207]: time="2025-07-15T11:29:31.803150883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc9c54def233bb56553f50c82420d603fc4ca6308b903dd8d10ae3c30039be1e\"" Jul 15 11:29:31.804566 kubelet[1568]: E0715 11:29:31.804530 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:31.809464 env[1207]: time="2025-07-15T11:29:31.809417955Z" level=info msg="CreateContainer within sandbox \"dc9c54def233bb56553f50c82420d603fc4ca6308b903dd8d10ae3c30039be1e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 11:29:31.817145 env[1207]: time="2025-07-15T11:29:31.817101655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d700bfe3d9b53d903c613b93d056ecab,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc2d02ad400b406766ab3e4d43ab151fc777ec2413f18fbdb51b26ebfdbaaa8c\"" Jul 15 11:29:31.818076 kubelet[1568]: E0715 11:29:31.818042 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 
11:29:31.825036 env[1207]: time="2025-07-15T11:29:31.824972907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a6b0a7d3431e4e93f29831b8e4496cc9a31ed6544245e9f68760368822b6528\"" Jul 15 11:29:31.826212 env[1207]: time="2025-07-15T11:29:31.826189200Z" level=info msg="CreateContainer within sandbox \"bc2d02ad400b406766ab3e4d43ab151fc777ec2413f18fbdb51b26ebfdbaaa8c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 11:29:31.826300 kubelet[1568]: E0715 11:29:31.826263 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:31.862069 kubelet[1568]: I0715 11:29:31.862026 1568 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:29:31.862484 kubelet[1568]: E0715 11:29:31.862441 1568 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Jul 15 11:29:31.951506 env[1207]: time="2025-07-15T11:29:31.951430518Z" level=info msg="CreateContainer within sandbox \"0a6b0a7d3431e4e93f29831b8e4496cc9a31ed6544245e9f68760368822b6528\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 11:29:31.954233 env[1207]: time="2025-07-15T11:29:31.954194122Z" level=info msg="CreateContainer within sandbox \"dc9c54def233bb56553f50c82420d603fc4ca6308b903dd8d10ae3c30039be1e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d743ca8d4f81c7b349a6b5c06fc4c4d341f05ee3154f0cb34f841a51150f5df6\"" Jul 15 11:29:31.954806 env[1207]: time="2025-07-15T11:29:31.954770201Z" level=info msg="StartContainer for \"d743ca8d4f81c7b349a6b5c06fc4c4d341f05ee3154f0cb34f841a51150f5df6\"" Jul 15 11:29:31.968399 
systemd[1]: Started cri-containerd-d743ca8d4f81c7b349a6b5c06fc4c4d341f05ee3154f0cb34f841a51150f5df6.scope. Jul 15 11:29:31.977232 env[1207]: time="2025-07-15T11:29:31.977172495Z" level=info msg="CreateContainer within sandbox \"0a6b0a7d3431e4e93f29831b8e4496cc9a31ed6544245e9f68760368822b6528\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c875316d2632092c50d4a960c72499be40c6f6af94a431d739704a90b80110ae\"" Jul 15 11:29:31.977707 env[1207]: time="2025-07-15T11:29:31.977686245Z" level=info msg="StartContainer for \"c875316d2632092c50d4a960c72499be40c6f6af94a431d739704a90b80110ae\"" Jul 15 11:29:31.980804 env[1207]: time="2025-07-15T11:29:31.980756548Z" level=info msg="CreateContainer within sandbox \"bc2d02ad400b406766ab3e4d43ab151fc777ec2413f18fbdb51b26ebfdbaaa8c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60e0fb85d2b39f143fb3e428633d3de75d793d5d4955b3ab66d807af4dc07f1b\"" Jul 15 11:29:31.981294 env[1207]: time="2025-07-15T11:29:31.981275338Z" level=info msg="StartContainer for \"60e0fb85d2b39f143fb3e428633d3de75d793d5d4955b3ab66d807af4dc07f1b\"" Jul 15 11:29:31.994522 systemd[1]: Started cri-containerd-c875316d2632092c50d4a960c72499be40c6f6af94a431d739704a90b80110ae.scope. Jul 15 11:29:32.000377 systemd[1]: Started cri-containerd-60e0fb85d2b39f143fb3e428633d3de75d793d5d4955b3ab66d807af4dc07f1b.scope. 
Jul 15 11:29:32.014434 env[1207]: time="2025-07-15T11:29:32.014371607Z" level=info msg="StartContainer for \"d743ca8d4f81c7b349a6b5c06fc4c4d341f05ee3154f0cb34f841a51150f5df6\" returns successfully" Jul 15 11:29:32.034844 env[1207]: time="2025-07-15T11:29:32.034711402Z" level=info msg="StartContainer for \"c875316d2632092c50d4a960c72499be40c6f6af94a431d739704a90b80110ae\" returns successfully" Jul 15 11:29:32.047706 env[1207]: time="2025-07-15T11:29:32.047634804Z" level=info msg="StartContainer for \"60e0fb85d2b39f143fb3e428633d3de75d793d5d4955b3ab66d807af4dc07f1b\" returns successfully" Jul 15 11:29:32.205104 kubelet[1568]: E0715 11:29:32.205073 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:29:32.205471 kubelet[1568]: E0715 11:29:32.205453 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:32.207689 kubelet[1568]: E0715 11:29:32.207662 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:29:32.208022 kubelet[1568]: E0715 11:29:32.208008 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:32.210317 kubelet[1568]: E0715 11:29:32.210288 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:29:32.210676 kubelet[1568]: E0715 11:29:32.210658 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:33.212066 kubelet[1568]: E0715 
11:29:33.212027 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:29:33.212442 kubelet[1568]: E0715 11:29:33.212143 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:33.212442 kubelet[1568]: E0715 11:29:33.212333 1568 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 11:29:33.212442 kubelet[1568]: E0715 11:29:33.212396 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:33.314916 kubelet[1568]: E0715 11:29:33.314869 1568 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 11:29:33.464238 kubelet[1568]: I0715 11:29:33.464111 1568 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:29:33.472628 kubelet[1568]: I0715 11:29:33.472591 1568 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 11:29:33.472628 kubelet[1568]: E0715 11:29:33.472626 1568 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 11:29:33.481987 kubelet[1568]: E0715 11:29:33.481939 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:33.583115 kubelet[1568]: E0715 11:29:33.583073 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:33.684130 kubelet[1568]: E0715 11:29:33.684074 1568 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:33.784829 kubelet[1568]: E0715 11:29:33.784704 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:33.885327 kubelet[1568]: E0715 11:29:33.885273 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:33.986278 kubelet[1568]: E0715 11:29:33.986224 1568 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:34.082959 kubelet[1568]: I0715 11:29:34.082835 1568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:34.088367 kubelet[1568]: E0715 11:29:34.088333 1568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:34.088525 kubelet[1568]: I0715 11:29:34.088357 1568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:34.089827 kubelet[1568]: E0715 11:29:34.089794 1568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:34.089827 kubelet[1568]: I0715 11:29:34.089812 1568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:34.091028 kubelet[1568]: E0715 11:29:34.090999 1568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:34.147767 kubelet[1568]: I0715 11:29:34.147725 1568 apiserver.go:52] "Watching 
apiserver" Jul 15 11:29:34.183279 kubelet[1568]: I0715 11:29:34.183232 1568 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 11:29:34.212518 kubelet[1568]: I0715 11:29:34.212477 1568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:34.214285 kubelet[1568]: E0715 11:29:34.214244 1568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:34.214444 kubelet[1568]: E0715 11:29:34.214421 1568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:35.508821 systemd[1]: Reloading. Jul 15 11:29:35.567535 /usr/lib/systemd/system-generators/torcx-generator[1880]: time="2025-07-15T11:29:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:29:35.567558 /usr/lib/systemd/system-generators/torcx-generator[1880]: time="2025-07-15T11:29:35Z" level=info msg="torcx already run" Jul 15 11:29:35.623081 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:29:35.623097 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:29:35.639953 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 15 11:29:35.727064 kubelet[1568]: I0715 11:29:35.727031 1568 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:29:35.727205 systemd[1]: Stopping kubelet.service... Jul 15 11:29:35.750567 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:29:35.750785 systemd[1]: Stopped kubelet.service. Jul 15 11:29:35.750835 systemd[1]: kubelet.service: Consumed 1.252s CPU time. Jul 15 11:29:35.752384 systemd[1]: Starting kubelet.service... Jul 15 11:29:35.838831 systemd[1]: Started kubelet.service. Jul 15 11:29:35.865277 kubelet[1925]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:29:35.865633 kubelet[1925]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 11:29:35.865633 kubelet[1925]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 11:29:35.865925 kubelet[1925]: I0715 11:29:35.865650 1925 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:29:35.871221 kubelet[1925]: I0715 11:29:35.871188 1925 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 11:29:35.871221 kubelet[1925]: I0715 11:29:35.871208 1925 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:29:35.871393 kubelet[1925]: I0715 11:29:35.871373 1925 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 11:29:35.872353 kubelet[1925]: I0715 11:29:35.872336 1925 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 11:29:35.874014 kubelet[1925]: I0715 11:29:35.873990 1925 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:29:35.876417 kubelet[1925]: E0715 11:29:35.876392 1925 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:29:35.876417 kubelet[1925]: I0715 11:29:35.876417 1925 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:29:35.879868 kubelet[1925]: I0715 11:29:35.879843 1925 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 11:29:35.880026 kubelet[1925]: I0715 11:29:35.879998 1925 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:29:35.880165 kubelet[1925]: I0715 11:29:35.880018 1925 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 11:29:35.880262 kubelet[1925]: I0715 11:29:35.880167 1925 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:29:35.880262 
kubelet[1925]: I0715 11:29:35.880175 1925 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 11:29:35.880262 kubelet[1925]: I0715 11:29:35.880206 1925 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:29:35.880334 kubelet[1925]: I0715 11:29:35.880319 1925 kubelet.go:480] "Attempting to sync node with API server" Jul 15 11:29:35.880334 kubelet[1925]: I0715 11:29:35.880329 1925 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:29:35.880381 kubelet[1925]: I0715 11:29:35.880344 1925 kubelet.go:386] "Adding apiserver pod source" Jul 15 11:29:35.880381 kubelet[1925]: I0715 11:29:35.880355 1925 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:29:35.883585 kubelet[1925]: I0715 11:29:35.883562 1925 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:29:35.883974 kubelet[1925]: I0715 11:29:35.883957 1925 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 11:29:35.885566 kubelet[1925]: I0715 11:29:35.885554 1925 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 11:29:35.885646 kubelet[1925]: I0715 11:29:35.885586 1925 server.go:1289] "Started kubelet" Jul 15 11:29:35.887829 kubelet[1925]: I0715 11:29:35.886318 1925 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:29:35.887829 kubelet[1925]: I0715 11:29:35.886434 1925 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:29:35.887829 kubelet[1925]: I0715 11:29:35.886504 1925 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:29:35.887829 kubelet[1925]: I0715 11:29:35.886550 1925 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:29:35.887829 
kubelet[1925]: I0715 11:29:35.887180 1925 server.go:317] "Adding debug handlers to kubelet server" Jul 15 11:29:35.891315 kubelet[1925]: I0715 11:29:35.891302 1925 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:29:35.892336 kubelet[1925]: E0715 11:29:35.892310 1925 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 11:29:35.892450 kubelet[1925]: I0715 11:29:35.892437 1925 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 11:29:35.892605 kubelet[1925]: I0715 11:29:35.892589 1925 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 11:29:35.892860 kubelet[1925]: I0715 11:29:35.892848 1925 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:29:35.893023 kubelet[1925]: I0715 11:29:35.892981 1925 factory.go:223] Registration of the systemd container factory successfully Jul 15 11:29:35.893082 kubelet[1925]: I0715 11:29:35.893059 1925 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:29:35.895384 kubelet[1925]: E0715 11:29:35.893546 1925 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:29:35.895384 kubelet[1925]: I0715 11:29:35.893712 1925 factory.go:223] Registration of the containerd container factory successfully Jul 15 11:29:35.908535 kubelet[1925]: I0715 11:29:35.908495 1925 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 11:29:35.909424 kubelet[1925]: I0715 11:29:35.909412 1925 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 15 11:29:35.909511 kubelet[1925]: I0715 11:29:35.909498 1925 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 11:29:35.909598 kubelet[1925]: I0715 11:29:35.909583 1925 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 11:29:35.909679 kubelet[1925]: I0715 11:29:35.909666 1925 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 11:29:35.909784 kubelet[1925]: E0715 11:29:35.909766 1925 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:29:35.923294 kubelet[1925]: I0715 11:29:35.923274 1925 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 11:29:35.923455 kubelet[1925]: I0715 11:29:35.923436 1925 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 11:29:35.923536 kubelet[1925]: I0715 11:29:35.923524 1925 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:29:35.923712 kubelet[1925]: I0715 11:29:35.923698 1925 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 11:29:35.923794 kubelet[1925]: I0715 11:29:35.923770 1925 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 11:29:35.923860 kubelet[1925]: I0715 11:29:35.923847 1925 policy_none.go:49] "None policy: Start" Jul 15 11:29:35.923930 kubelet[1925]: I0715 11:29:35.923917 1925 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 11:29:35.924004 kubelet[1925]: I0715 11:29:35.923992 1925 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:29:35.924152 kubelet[1925]: I0715 11:29:35.924140 1925 state_mem.go:75] "Updated machine memory state" Jul 15 11:29:35.927027 kubelet[1925]: E0715 11:29:35.927015 1925 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 11:29:35.927346 kubelet[1925]: I0715 
11:29:35.927336 1925 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:29:35.927464 kubelet[1925]: I0715 11:29:35.927428 1925 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:29:35.927661 kubelet[1925]: I0715 11:29:35.927650 1925 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:29:35.928034 kubelet[1925]: E0715 11:29:35.928021 1925 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 11:29:36.010685 kubelet[1925]: I0715 11:29:36.010648 1925 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:36.010827 kubelet[1925]: I0715 11:29:36.010702 1925 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:36.010827 kubelet[1925]: I0715 11:29:36.010724 1925 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:36.030998 kubelet[1925]: I0715 11:29:36.030958 1925 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 11:29:36.035424 kubelet[1925]: I0715 11:29:36.035397 1925 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 15 11:29:36.035478 kubelet[1925]: I0715 11:29:36.035453 1925 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 11:29:36.093389 kubelet[1925]: I0715 11:29:36.093236 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d700bfe3d9b53d903c613b93d056ecab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d700bfe3d9b53d903c613b93d056ecab\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:36.093389 kubelet[1925]: I0715 11:29:36.093281 
1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:36.093389 kubelet[1925]: I0715 11:29:36.093301 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:36.093389 kubelet[1925]: I0715 11:29:36.093353 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:36.093389 kubelet[1925]: I0715 11:29:36.093383 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:36.093750 kubelet[1925]: I0715 11:29:36.093400 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d700bfe3d9b53d903c613b93d056ecab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d700bfe3d9b53d903c613b93d056ecab\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:36.093750 kubelet[1925]: I0715 
11:29:36.093412 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d700bfe3d9b53d903c613b93d056ecab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d700bfe3d9b53d903c613b93d056ecab\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:36.093750 kubelet[1925]: I0715 11:29:36.093439 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:29:36.093750 kubelet[1925]: I0715 11:29:36.093464 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:29:36.314447 kubelet[1925]: E0715 11:29:36.314417 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:36.316643 kubelet[1925]: E0715 11:29:36.316610 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:36.316744 kubelet[1925]: E0715 11:29:36.316688 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:36.505212 sudo[1964]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 11:29:36.505487 
sudo[1964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 15 11:29:36.881681 kubelet[1925]: I0715 11:29:36.881582 1925 apiserver.go:52] "Watching apiserver" Jul 15 11:29:36.892850 kubelet[1925]: I0715 11:29:36.892828 1925 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 11:29:36.917720 kubelet[1925]: I0715 11:29:36.917688 1925 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:36.917808 kubelet[1925]: E0715 11:29:36.917723 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:36.919810 kubelet[1925]: E0715 11:29:36.918017 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:36.923575 kubelet[1925]: E0715 11:29:36.923273 1925 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 11:29:36.923575 kubelet[1925]: E0715 11:29:36.923392 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:36.938932 kubelet[1925]: I0715 11:29:36.938863 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.93885072 podStartE2EDuration="938.85072ms" podCreationTimestamp="2025-07-15 11:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:29:36.933265761 +0000 UTC m=+1.091674364" watchObservedRunningTime="2025-07-15 11:29:36.93885072 +0000 UTC m=+1.097259303" Jul 15 
11:29:36.947015 kubelet[1925]: I0715 11:29:36.946963 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.94693679 podStartE2EDuration="946.93679ms" podCreationTimestamp="2025-07-15 11:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:29:36.938957184 +0000 UTC m=+1.097365767" watchObservedRunningTime="2025-07-15 11:29:36.94693679 +0000 UTC m=+1.105345373" Jul 15 11:29:36.953881 kubelet[1925]: I0715 11:29:36.953825 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.953813359 podStartE2EDuration="953.813359ms" podCreationTimestamp="2025-07-15 11:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:29:36.947216245 +0000 UTC m=+1.105624828" watchObservedRunningTime="2025-07-15 11:29:36.953813359 +0000 UTC m=+1.112221942" Jul 15 11:29:36.957449 sudo[1964]: pam_unix(sudo:session): session closed for user root Jul 15 11:29:37.919197 kubelet[1925]: E0715 11:29:37.919164 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:37.919563 kubelet[1925]: E0715 11:29:37.919543 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:38.301894 sudo[1303]: pam_unix(sudo:session): session closed for user root Jul 15 11:29:38.303038 sshd[1300]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:38.304709 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:40040.service: Deactivated successfully. 
Jul 15 11:29:38.305298 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 11:29:38.305403 systemd[1]: session-5.scope: Consumed 5.508s CPU time. Jul 15 11:29:38.305699 systemd-logind[1191]: Session 5 logged out. Waiting for processes to exit. Jul 15 11:29:38.306259 systemd-logind[1191]: Removed session 5. Jul 15 11:29:38.922181 kubelet[1925]: E0715 11:29:38.922124 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:42.254322 kubelet[1925]: I0715 11:29:42.254292 1925 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 11:29:42.254672 env[1207]: time="2025-07-15T11:29:42.254543580Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 11:29:42.254896 kubelet[1925]: I0715 11:29:42.254694 1925 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 11:29:42.732325 kubelet[1925]: E0715 11:29:42.732302 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:42.926164 kubelet[1925]: E0715 11:29:42.926132 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.270988 systemd[1]: Created slice kubepods-besteffort-podeffa7e08_bad7_49a3_b9a6_4055cd985be1.slice. Jul 15 11:29:43.302977 systemd[1]: Created slice kubepods-besteffort-pod6e35e84d_940f_4a01_8cb4_f4c49dcd67bd.slice. Jul 15 11:29:43.313478 systemd[1]: Created slice kubepods-burstable-pod6a4da12c_43ff_4174_8638_aebbbd591384.slice. 
Jul 15 11:29:43.342925 kubelet[1925]: I0715 11:29:43.342861 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e35e84d-940f-4a01-8cb4-f4c49dcd67bd-kube-proxy\") pod \"kube-proxy-pdxvt\" (UID: \"6e35e84d-940f-4a01-8cb4-f4c49dcd67bd\") " pod="kube-system/kube-proxy-pdxvt" Jul 15 11:29:43.342925 kubelet[1925]: I0715 11:29:43.342917 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e35e84d-940f-4a01-8cb4-f4c49dcd67bd-xtables-lock\") pod \"kube-proxy-pdxvt\" (UID: \"6e35e84d-940f-4a01-8cb4-f4c49dcd67bd\") " pod="kube-system/kube-proxy-pdxvt" Jul 15 11:29:43.342925 kubelet[1925]: I0715 11:29:43.342934 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-run\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343380 kubelet[1925]: I0715 11:29:43.342951 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-hostproc\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343380 kubelet[1925]: I0715 11:29:43.342977 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cni-path\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343380 kubelet[1925]: I0715 11:29:43.342991 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-lib-modules\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343380 kubelet[1925]: I0715 11:29:43.343006 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-cgroup\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343380 kubelet[1925]: I0715 11:29:43.343022 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e35e84d-940f-4a01-8cb4-f4c49dcd67bd-lib-modules\") pod \"kube-proxy-pdxvt\" (UID: \"6e35e84d-940f-4a01-8cb4-f4c49dcd67bd\") " pod="kube-system/kube-proxy-pdxvt" Jul 15 11:29:43.343380 kubelet[1925]: I0715 11:29:43.343052 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjkr2\" (UniqueName: \"kubernetes.io/projected/6e35e84d-940f-4a01-8cb4-f4c49dcd67bd-kube-api-access-qjkr2\") pod \"kube-proxy-pdxvt\" (UID: \"6e35e84d-940f-4a01-8cb4-f4c49dcd67bd\") " pod="kube-system/kube-proxy-pdxvt" Jul 15 11:29:43.343529 kubelet[1925]: I0715 11:29:43.343068 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a4da12c-43ff-4174-8638-aebbbd591384-clustermesh-secrets\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343529 kubelet[1925]: I0715 11:29:43.343080 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-config-path\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343529 kubelet[1925]: I0715 11:29:43.343096 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/effa7e08-bad7-49a3-b9a6-4055cd985be1-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xwm6k\" (UID: \"effa7e08-bad7-49a3-b9a6-4055cd985be1\") " pod="kube-system/cilium-operator-6c4d7847fc-xwm6k" Jul 15 11:29:43.343529 kubelet[1925]: I0715 11:29:43.343122 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd48d\" (UniqueName: \"kubernetes.io/projected/effa7e08-bad7-49a3-b9a6-4055cd985be1-kube-api-access-nd48d\") pod \"cilium-operator-6c4d7847fc-xwm6k\" (UID: \"effa7e08-bad7-49a3-b9a6-4055cd985be1\") " pod="kube-system/cilium-operator-6c4d7847fc-xwm6k" Jul 15 11:29:43.343529 kubelet[1925]: I0715 11:29:43.343137 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-bpf-maps\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343658 kubelet[1925]: I0715 11:29:43.343149 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-xtables-lock\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343658 kubelet[1925]: I0715 11:29:43.343163 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-host-proc-sys-net\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343658 kubelet[1925]: I0715 11:29:43.343177 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-host-proc-sys-kernel\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343658 kubelet[1925]: I0715 11:29:43.343204 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a4da12c-43ff-4174-8638-aebbbd591384-hubble-tls\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343658 kubelet[1925]: I0715 11:29:43.343222 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-etc-cni-netd\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.343778 kubelet[1925]: I0715 11:29:43.343239 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccf2d\" (UniqueName: \"kubernetes.io/projected/6a4da12c-43ff-4174-8638-aebbbd591384-kube-api-access-ccf2d\") pod \"cilium-7md5m\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " pod="kube-system/cilium-7md5m" Jul 15 11:29:43.444214 kubelet[1925]: I0715 11:29:43.444168 1925 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 15 11:29:43.583659 kubelet[1925]: E0715 11:29:43.582813 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.583955 env[1207]: time="2025-07-15T11:29:43.583891881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xwm6k,Uid:effa7e08-bad7-49a3-b9a6-4055cd985be1,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:43.600148 env[1207]: time="2025-07-15T11:29:43.600080505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:43.600148 env[1207]: time="2025-07-15T11:29:43.600124799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:43.600148 env[1207]: time="2025-07-15T11:29:43.600138354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:43.600399 env[1207]: time="2025-07-15T11:29:43.600295923Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd pid=2024 runtime=io.containerd.runc.v2 Jul 15 11:29:43.606642 kubelet[1925]: E0715 11:29:43.606599 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.607245 env[1207]: time="2025-07-15T11:29:43.607187596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdxvt,Uid:6e35e84d-940f-4a01-8cb4-f4c49dcd67bd,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:43.612020 systemd[1]: Started cri-containerd-e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd.scope. Jul 15 11:29:43.616513 kubelet[1925]: E0715 11:29:43.616477 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.618738 env[1207]: time="2025-07-15T11:29:43.618697266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7md5m,Uid:6a4da12c-43ff-4174-8638-aebbbd591384,Namespace:kube-system,Attempt:0,}" Jul 15 11:29:43.627698 env[1207]: time="2025-07-15T11:29:43.627622711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:43.627841 env[1207]: time="2025-07-15T11:29:43.627717060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:43.627841 env[1207]: time="2025-07-15T11:29:43.627738321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:43.628121 env[1207]: time="2025-07-15T11:29:43.628087684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/acdc3299240aad5b1a5c3642919777a3c370c95da2fa4429fce4a2e982ef26ce pid=2056 runtime=io.containerd.runc.v2 Jul 15 11:29:43.641404 env[1207]: time="2025-07-15T11:29:43.641287122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:29:43.641562 env[1207]: time="2025-07-15T11:29:43.641413763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:29:43.641562 env[1207]: time="2025-07-15T11:29:43.641443319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:29:43.641658 env[1207]: time="2025-07-15T11:29:43.641588304Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a pid=2080 runtime=io.containerd.runc.v2 Jul 15 11:29:43.643325 systemd[1]: Started cri-containerd-acdc3299240aad5b1a5c3642919777a3c370c95da2fa4429fce4a2e982ef26ce.scope. Jul 15 11:29:43.658242 systemd[1]: Started cri-containerd-fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a.scope. 
Jul 15 11:29:43.665704 env[1207]: time="2025-07-15T11:29:43.665311521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xwm6k,Uid:effa7e08-bad7-49a3-b9a6-4055cd985be1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd\"" Jul 15 11:29:43.666731 kubelet[1925]: E0715 11:29:43.666704 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.667576 env[1207]: time="2025-07-15T11:29:43.667534552Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 11:29:43.674471 env[1207]: time="2025-07-15T11:29:43.674411137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pdxvt,Uid:6e35e84d-940f-4a01-8cb4-f4c49dcd67bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"acdc3299240aad5b1a5c3642919777a3c370c95da2fa4429fce4a2e982ef26ce\"" Jul 15 11:29:43.676916 kubelet[1925]: E0715 11:29:43.676879 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.682408 env[1207]: time="2025-07-15T11:29:43.682227655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7md5m,Uid:6a4da12c-43ff-4174-8638-aebbbd591384,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\"" Jul 15 11:29:43.682672 env[1207]: time="2025-07-15T11:29:43.682621403Z" level=info msg="CreateContainer within sandbox \"acdc3299240aad5b1a5c3642919777a3c370c95da2fa4429fce4a2e982ef26ce\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 11:29:43.684133 kubelet[1925]: E0715 11:29:43.684100 1925 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.698590 env[1207]: time="2025-07-15T11:29:43.698549361Z" level=info msg="CreateContainer within sandbox \"acdc3299240aad5b1a5c3642919777a3c370c95da2fa4429fce4a2e982ef26ce\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b733694de076573ac6f758ac2f964ba7c0c23888da29d14dbd601663bfd8e561\"" Jul 15 11:29:43.699266 env[1207]: time="2025-07-15T11:29:43.699222178Z" level=info msg="StartContainer for \"b733694de076573ac6f758ac2f964ba7c0c23888da29d14dbd601663bfd8e561\"" Jul 15 11:29:43.712542 systemd[1]: Started cri-containerd-b733694de076573ac6f758ac2f964ba7c0c23888da29d14dbd601663bfd8e561.scope. Jul 15 11:29:43.734149 env[1207]: time="2025-07-15T11:29:43.734105853Z" level=info msg="StartContainer for \"b733694de076573ac6f758ac2f964ba7c0c23888da29d14dbd601663bfd8e561\" returns successfully" Jul 15 11:29:43.768898 kubelet[1925]: E0715 11:29:43.768860 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.818149 update_engine[1196]: I0715 11:29:43.818116 1196 update_attempter.cc:509] Updating boot flags... 
Jul 15 11:29:43.929617 kubelet[1925]: E0715 11:29:43.928834 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.930692 kubelet[1925]: E0715 11:29:43.930672 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.930996 kubelet[1925]: E0715 11:29:43.930970 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:43.947206 kubelet[1925]: I0715 11:29:43.947163 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pdxvt" podStartSLOduration=0.947148221 podStartE2EDuration="947.148221ms" podCreationTimestamp="2025-07-15 11:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:29:43.938900413 +0000 UTC m=+8.097308996" watchObservedRunningTime="2025-07-15 11:29:43.947148221 +0000 UTC m=+8.105556804" Jul 15 11:29:44.745337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127997728.mount: Deactivated successfully. 
Jul 15 11:29:45.550427 kubelet[1925]: E0715 11:29:45.550397 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:45.591155 env[1207]: time="2025-07-15T11:29:45.591095067Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:45.592882 env[1207]: time="2025-07-15T11:29:45.592835205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:45.594493 env[1207]: time="2025-07-15T11:29:45.594457330Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:45.594893 env[1207]: time="2025-07-15T11:29:45.594856215Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 15 11:29:45.596129 env[1207]: time="2025-07-15T11:29:45.595872953Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 11:29:45.599469 env[1207]: time="2025-07-15T11:29:45.599442880Z" level=info msg="CreateContainer within sandbox \"e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 11:29:45.612111 env[1207]: time="2025-07-15T11:29:45.612062231Z" level=info 
msg="CreateContainer within sandbox \"e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\"" Jul 15 11:29:45.612576 env[1207]: time="2025-07-15T11:29:45.612540918Z" level=info msg="StartContainer for \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\"" Jul 15 11:29:45.631366 systemd[1]: Started cri-containerd-84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967.scope. Jul 15 11:29:45.656514 env[1207]: time="2025-07-15T11:29:45.656446474Z" level=info msg="StartContainer for \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\" returns successfully" Jul 15 11:29:45.934716 kubelet[1925]: E0715 11:29:45.934077 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:45.934716 kubelet[1925]: E0715 11:29:45.934074 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:46.478071 kubelet[1925]: I0715 11:29:46.478004 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xwm6k" podStartSLOduration=1.549469996 podStartE2EDuration="3.47798318s" podCreationTimestamp="2025-07-15 11:29:43 +0000 UTC" firstStartedPulling="2025-07-15 11:29:43.667211929 +0000 UTC m=+7.825620502" lastFinishedPulling="2025-07-15 11:29:45.595725103 +0000 UTC m=+9.754133686" observedRunningTime="2025-07-15 11:29:46.444748605 +0000 UTC m=+10.603157188" watchObservedRunningTime="2025-07-15 11:29:46.47798318 +0000 UTC m=+10.636391763" Jul 15 11:29:46.609130 systemd[1]: run-containerd-runc-k8s.io-84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967-runc.4RTqmn.mount: 
Deactivated successfully. Jul 15 11:29:46.935952 kubelet[1925]: E0715 11:29:46.935840 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:46.935952 kubelet[1925]: E0715 11:29:46.935852 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:53.083788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946235580.mount: Deactivated successfully. Jul 15 11:29:58.051513 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:49328.service. Jul 15 11:29:58.091434 sshd[2366]: Accepted publickey for core from 10.0.0.1 port 49328 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:29:58.092526 sshd[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:29:58.647537 systemd-logind[1191]: New session 6 of user core. Jul 15 11:29:58.648650 systemd[1]: Started session-6.scope. 
Jul 15 11:29:58.660531 env[1207]: time="2025-07-15T11:29:58.660479830Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:58.662556 env[1207]: time="2025-07-15T11:29:58.662512029Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:58.664495 env[1207]: time="2025-07-15T11:29:58.664462583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:29:58.664897 env[1207]: time="2025-07-15T11:29:58.664873928Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 11:29:58.677509 env[1207]: time="2025-07-15T11:29:58.677474213Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:29:58.688980 env[1207]: time="2025-07-15T11:29:58.688942044Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\"" Jul 15 11:29:58.689355 env[1207]: time="2025-07-15T11:29:58.689329003Z" level=info msg="StartContainer for \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\"" Jul 15 11:29:58.710052 systemd[1]: Started 
cri-containerd-10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f.scope. Jul 15 11:29:58.742162 env[1207]: time="2025-07-15T11:29:58.742112390Z" level=info msg="StartContainer for \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\" returns successfully" Jul 15 11:29:58.754320 systemd[1]: cri-containerd-10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f.scope: Deactivated successfully. Jul 15 11:29:58.974765 kubelet[1925]: E0715 11:29:58.974226 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:58.981757 sshd[2366]: pam_unix(sshd:session): session closed for user core Jul 15 11:29:58.984926 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:49328.service: Deactivated successfully. Jul 15 11:29:58.985691 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 11:29:58.986317 systemd-logind[1191]: Session 6 logged out. Waiting for processes to exit. Jul 15 11:29:58.987370 systemd-logind[1191]: Removed session 6. 
Jul 15 11:29:59.046189 env[1207]: time="2025-07-15T11:29:59.046136399Z" level=info msg="shim disconnected" id=10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f Jul 15 11:29:59.046189 env[1207]: time="2025-07-15T11:29:59.046179861Z" level=warning msg="cleaning up after shim disconnected" id=10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f namespace=k8s.io Jul 15 11:29:59.046189 env[1207]: time="2025-07-15T11:29:59.046189870Z" level=info msg="cleaning up dead shim" Jul 15 11:29:59.052679 env[1207]: time="2025-07-15T11:29:59.052621732Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:29:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2432 runtime=io.containerd.runc.v2\n" Jul 15 11:29:59.685472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f-rootfs.mount: Deactivated successfully. Jul 15 11:29:59.958543 kubelet[1925]: E0715 11:29:59.958460 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:29:59.963038 env[1207]: time="2025-07-15T11:29:59.962998426Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 11:29:59.975803 env[1207]: time="2025-07-15T11:29:59.975744178Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\"" Jul 15 11:29:59.976303 env[1207]: time="2025-07-15T11:29:59.976279315Z" level=info msg="StartContainer for \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\"" Jul 15 11:29:59.993278 systemd[1]: Started 
cri-containerd-14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b.scope. Jul 15 11:30:00.017234 env[1207]: time="2025-07-15T11:30:00.017186018Z" level=info msg="StartContainer for \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\" returns successfully" Jul 15 11:30:00.025971 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 11:30:00.026271 systemd[1]: Stopped systemd-sysctl.service. Jul 15 11:30:00.026454 systemd[1]: Stopping systemd-sysctl.service... Jul 15 11:30:00.027880 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:30:00.028987 systemd[1]: cri-containerd-14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b.scope: Deactivated successfully. Jul 15 11:30:00.036246 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:30:00.050185 env[1207]: time="2025-07-15T11:30:00.050134106Z" level=info msg="shim disconnected" id=14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b Jul 15 11:30:00.050185 env[1207]: time="2025-07-15T11:30:00.050179041Z" level=warning msg="cleaning up after shim disconnected" id=14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b namespace=k8s.io Jul 15 11:30:00.050185 env[1207]: time="2025-07-15T11:30:00.050187346Z" level=info msg="cleaning up dead shim" Jul 15 11:30:00.058774 env[1207]: time="2025-07-15T11:30:00.058734397Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2495 runtime=io.containerd.runc.v2\n" Jul 15 11:30:00.686082 systemd[1]: run-containerd-runc-k8s.io-14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b-runc.kdcLdL.mount: Deactivated successfully. Jul 15 11:30:00.686195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b-rootfs.mount: Deactivated successfully. 
Jul 15 11:30:00.960813 kubelet[1925]: E0715 11:30:00.960793 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:00.967741 env[1207]: time="2025-07-15T11:30:00.967679406Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 11:30:00.984663 env[1207]: time="2025-07-15T11:30:00.984611611Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\"" Jul 15 11:30:00.985121 env[1207]: time="2025-07-15T11:30:00.985088890Z" level=info msg="StartContainer for \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\"" Jul 15 11:30:01.001490 systemd[1]: Started cri-containerd-0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137.scope. Jul 15 11:30:01.024077 env[1207]: time="2025-07-15T11:30:01.024035689Z" level=info msg="StartContainer for \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\" returns successfully" Jul 15 11:30:01.026056 systemd[1]: cri-containerd-0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137.scope: Deactivated successfully. 
Jul 15 11:30:01.044864 env[1207]: time="2025-07-15T11:30:01.044818524Z" level=info msg="shim disconnected" id=0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137 Jul 15 11:30:01.044864 env[1207]: time="2025-07-15T11:30:01.044861565Z" level=warning msg="cleaning up after shim disconnected" id=0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137 namespace=k8s.io Jul 15 11:30:01.045062 env[1207]: time="2025-07-15T11:30:01.044870823Z" level=info msg="cleaning up dead shim" Jul 15 11:30:01.050494 env[1207]: time="2025-07-15T11:30:01.050456164Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2552 runtime=io.containerd.runc.v2\n" Jul 15 11:30:01.686028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137-rootfs.mount: Deactivated successfully. Jul 15 11:30:01.964494 kubelet[1925]: E0715 11:30:01.964446 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:02.053066 env[1207]: time="2025-07-15T11:30:02.053015277Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 11:30:02.348431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042469632.mount: Deactivated successfully. 
Jul 15 11:30:02.443147 env[1207]: time="2025-07-15T11:30:02.443077478Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\"" Jul 15 11:30:02.443809 env[1207]: time="2025-07-15T11:30:02.443764671Z" level=info msg="StartContainer for \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\"" Jul 15 11:30:02.460075 systemd[1]: Started cri-containerd-51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd.scope. Jul 15 11:30:02.481681 systemd[1]: cri-containerd-51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd.scope: Deactivated successfully. Jul 15 11:30:02.483445 env[1207]: time="2025-07-15T11:30:02.483405730Z" level=info msg="StartContainer for \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\" returns successfully" Jul 15 11:30:02.503808 env[1207]: time="2025-07-15T11:30:02.503760570Z" level=info msg="shim disconnected" id=51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd Jul 15 11:30:02.503983 env[1207]: time="2025-07-15T11:30:02.503811436Z" level=warning msg="cleaning up after shim disconnected" id=51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd namespace=k8s.io Jul 15 11:30:02.503983 env[1207]: time="2025-07-15T11:30:02.503821024Z" level=info msg="cleaning up dead shim" Jul 15 11:30:02.511232 env[1207]: time="2025-07-15T11:30:02.511177326Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2606 runtime=io.containerd.runc.v2\n" Jul 15 11:30:02.685685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd-rootfs.mount: Deactivated successfully. 
Jul 15 11:30:02.967873 kubelet[1925]: E0715 11:30:02.967851 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:02.972618 env[1207]: time="2025-07-15T11:30:02.972577076Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 11:30:02.988437 env[1207]: time="2025-07-15T11:30:02.988392075Z" level=info msg="CreateContainer within sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\"" Jul 15 11:30:02.988992 env[1207]: time="2025-07-15T11:30:02.988805643Z" level=info msg="StartContainer for \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\"" Jul 15 11:30:03.003354 systemd[1]: Started cri-containerd-9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc.scope. Jul 15 11:30:03.032679 env[1207]: time="2025-07-15T11:30:03.032634338Z" level=info msg="StartContainer for \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\" returns successfully" Jul 15 11:30:03.082341 kubelet[1925]: I0715 11:30:03.082315 1925 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 11:30:03.144975 systemd[1]: Created slice kubepods-burstable-pod0291623d_ff44_432a_8470_ad5e6a56d91c.slice. Jul 15 11:30:03.151787 systemd[1]: Created slice kubepods-burstable-podf32f0e29_c2c5_4ca8_be57_86f50eb949bc.slice. 
Jul 15 11:30:03.178001 kubelet[1925]: I0715 11:30:03.177978 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0291623d-ff44-432a-8470-ad5e6a56d91c-config-volume\") pod \"coredns-674b8bbfcf-sw6mv\" (UID: \"0291623d-ff44-432a-8470-ad5e6a56d91c\") " pod="kube-system/coredns-674b8bbfcf-sw6mv" Jul 15 11:30:03.178170 kubelet[1925]: I0715 11:30:03.178155 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f32f0e29-c2c5-4ca8-be57-86f50eb949bc-config-volume\") pod \"coredns-674b8bbfcf-frwt6\" (UID: \"f32f0e29-c2c5-4ca8-be57-86f50eb949bc\") " pod="kube-system/coredns-674b8bbfcf-frwt6" Jul 15 11:30:03.178315 kubelet[1925]: I0715 11:30:03.178301 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-444qd\" (UniqueName: \"kubernetes.io/projected/0291623d-ff44-432a-8470-ad5e6a56d91c-kube-api-access-444qd\") pod \"coredns-674b8bbfcf-sw6mv\" (UID: \"0291623d-ff44-432a-8470-ad5e6a56d91c\") " pod="kube-system/coredns-674b8bbfcf-sw6mv" Jul 15 11:30:03.178451 kubelet[1925]: I0715 11:30:03.178438 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5qwn\" (UniqueName: \"kubernetes.io/projected/f32f0e29-c2c5-4ca8-be57-86f50eb949bc-kube-api-access-j5qwn\") pod \"coredns-674b8bbfcf-frwt6\" (UID: \"f32f0e29-c2c5-4ca8-be57-86f50eb949bc\") " pod="kube-system/coredns-674b8bbfcf-frwt6" Jul 15 11:30:03.448514 kubelet[1925]: E0715 11:30:03.448467 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:03.449350 env[1207]: time="2025-07-15T11:30:03.449311055Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-sw6mv,Uid:0291623d-ff44-432a-8470-ad5e6a56d91c,Namespace:kube-system,Attempt:0,}" Jul 15 11:30:03.455760 kubelet[1925]: E0715 11:30:03.455726 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:03.456108 env[1207]: time="2025-07-15T11:30:03.456066914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frwt6,Uid:f32f0e29-c2c5-4ca8-be57-86f50eb949bc,Namespace:kube-system,Attempt:0,}" Jul 15 11:30:03.689030 systemd[1]: run-containerd-runc-k8s.io-9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc-runc.NOtvbp.mount: Deactivated successfully. Jul 15 11:30:03.972020 kubelet[1925]: E0715 11:30:03.971987 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:03.986995 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:49340.service. Jul 15 11:30:04.026969 sshd[2791]: Accepted publickey for core from 10.0.0.1 port 49340 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:30:04.028046 sshd[2791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:04.031408 systemd-logind[1191]: New session 7 of user core. Jul 15 11:30:04.032197 systemd[1]: Started session-7.scope. Jul 15 11:30:04.168539 sshd[2791]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:04.171130 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:49340.service: Deactivated successfully. Jul 15 11:30:04.171917 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 11:30:04.172619 systemd-logind[1191]: Session 7 logged out. Waiting for processes to exit. Jul 15 11:30:04.173227 systemd-logind[1191]: Removed session 7. 
Jul 15 11:30:04.973427 kubelet[1925]: E0715 11:30:04.973396 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:05.045509 systemd-networkd[1027]: cilium_host: Link UP
Jul 15 11:30:05.046033 systemd-networkd[1027]: cilium_net: Link UP
Jul 15 11:30:05.047291 systemd-networkd[1027]: cilium_net: Gained carrier
Jul 15 11:30:05.048388 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 15 11:30:05.048454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 15 11:30:05.048541 systemd-networkd[1027]: cilium_host: Gained carrier
Jul 15 11:30:05.115353 systemd-networkd[1027]: cilium_vxlan: Link UP
Jul 15 11:30:05.115362 systemd-networkd[1027]: cilium_vxlan: Gained carrier
Jul 15 11:30:05.303281 kernel: NET: Registered PF_ALG protocol family
Jul 15 11:30:05.475411 systemd-networkd[1027]: cilium_host: Gained IPv6LL
Jul 15 11:30:05.747577 systemd-networkd[1027]: cilium_net: Gained IPv6LL
Jul 15 11:30:05.827692 systemd-networkd[1027]: lxc_health: Link UP
Jul 15 11:30:05.833119 systemd-networkd[1027]: lxc_health: Gained carrier
Jul 15 11:30:05.833295 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 15 11:30:05.974599 kubelet[1925]: E0715 11:30:05.974562 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:05.993651 systemd-networkd[1027]: lxcc2fad49120d6: Link UP
Jul 15 11:30:06.002288 kernel: eth0: renamed from tmp0a513
Jul 15 11:30:06.015635 kernel: eth0: renamed from tmp3f5ed
Jul 15 11:30:06.020305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc166db09fa2a6: link becomes ready
Jul 15 11:30:06.020820 systemd-networkd[1027]: lxc166db09fa2a6: Link UP
Jul 15 11:30:06.021243 systemd-networkd[1027]: lxc166db09fa2a6: Gained carrier
Jul 15 11:30:06.023817 systemd-networkd[1027]: lxcc2fad49120d6: Gained carrier
Jul 15 11:30:06.024384 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc2fad49120d6: link becomes ready
Jul 15 11:30:06.771443 systemd-networkd[1027]: cilium_vxlan: Gained IPv6LL
Jul 15 11:30:06.976218 kubelet[1925]: E0715 11:30:06.976190 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:07.027445 systemd-networkd[1027]: lxc_health: Gained IPv6LL
Jul 15 11:30:07.795425 systemd-networkd[1027]: lxc166db09fa2a6: Gained IPv6LL
Jul 15 11:30:07.854813 kubelet[1925]: I0715 11:30:07.854759 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7md5m" podStartSLOduration=9.872443454999999 podStartE2EDuration="24.854742687s" podCreationTimestamp="2025-07-15 11:29:43 +0000 UTC" firstStartedPulling="2025-07-15 11:29:43.684742229 +0000 UTC m=+7.843150812" lastFinishedPulling="2025-07-15 11:29:58.667041461 +0000 UTC m=+22.825450044" observedRunningTime="2025-07-15 11:30:03.985060801 +0000 UTC m=+28.143469394" watchObservedRunningTime="2025-07-15 11:30:07.854742687 +0000 UTC m=+32.013151270"
Jul 15 11:30:07.923437 systemd-networkd[1027]: lxcc2fad49120d6: Gained IPv6LL
Jul 15 11:30:07.978109 kubelet[1925]: E0715 11:30:07.978069 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:08.979696 kubelet[1925]: E0715 11:30:08.979669 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:09.172163 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:40238.service.
Jul 15 11:30:09.185871 env[1207]: time="2025-07-15T11:30:09.185791189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:30:09.185871 env[1207]: time="2025-07-15T11:30:09.185850099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:30:09.185871 env[1207]: time="2025-07-15T11:30:09.185861671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:30:09.186201 env[1207]: time="2025-07-15T11:30:09.186078699Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f5ed67ef185a7ef0891625655a8d2746d9bdee42071596e53f7a6bd1804b22e pid=3200 runtime=io.containerd.runc.v2
Jul 15 11:30:09.199303 env[1207]: time="2025-07-15T11:30:09.196003207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:30:09.199303 env[1207]: time="2025-07-15T11:30:09.196063081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:30:09.199303 env[1207]: time="2025-07-15T11:30:09.196083138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:30:09.199303 env[1207]: time="2025-07-15T11:30:09.196197944Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a513d63e164f8ce54003faa8fa0084a893432bdbaaf6ccb4bd41eb309bbf090 pid=3227 runtime=io.containerd.runc.v2
Jul 15 11:30:09.200965 systemd[1]: run-containerd-runc-k8s.io-3f5ed67ef185a7ef0891625655a8d2746d9bdee42071596e53f7a6bd1804b22e-runc.PKntmE.mount: Deactivated successfully.
Jul 15 11:30:09.204802 systemd[1]: Started cri-containerd-3f5ed67ef185a7ef0891625655a8d2746d9bdee42071596e53f7a6bd1804b22e.scope.
Jul 15 11:30:09.210589 systemd[1]: Started cri-containerd-0a513d63e164f8ce54003faa8fa0084a893432bdbaaf6ccb4bd41eb309bbf090.scope.
Jul 15 11:30:09.216512 systemd-resolved[1141]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 11:30:09.220214 sshd[3192]: Accepted publickey for core from 10.0.0.1 port 40238 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:09.221485 sshd[3192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:09.223674 systemd-resolved[1141]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 11:30:09.225497 systemd[1]: Started session-8.scope.
Jul 15 11:30:09.225974 systemd-logind[1191]: New session 8 of user core.
Jul 15 11:30:09.240549 env[1207]: time="2025-07-15T11:30:09.240454335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sw6mv,Uid:0291623d-ff44-432a-8470-ad5e6a56d91c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f5ed67ef185a7ef0891625655a8d2746d9bdee42071596e53f7a6bd1804b22e\""
Jul 15 11:30:09.241448 kubelet[1925]: E0715 11:30:09.241425 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:09.248538 env[1207]: time="2025-07-15T11:30:09.248493241Z" level=info msg="CreateContainer within sandbox \"3f5ed67ef185a7ef0891625655a8d2746d9bdee42071596e53f7a6bd1804b22e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 11:30:09.252835 env[1207]: time="2025-07-15T11:30:09.252793742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frwt6,Uid:f32f0e29-c2c5-4ca8-be57-86f50eb949bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a513d63e164f8ce54003faa8fa0084a893432bdbaaf6ccb4bd41eb309bbf090\""
Jul 15 11:30:09.253456 kubelet[1925]: E0715 11:30:09.253431 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:09.261298 env[1207]: time="2025-07-15T11:30:09.261241907Z" level=info msg="CreateContainer within sandbox \"0a513d63e164f8ce54003faa8fa0084a893432bdbaaf6ccb4bd41eb309bbf090\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 11:30:09.267096 env[1207]: time="2025-07-15T11:30:09.267048428Z" level=info msg="CreateContainer within sandbox \"3f5ed67ef185a7ef0891625655a8d2746d9bdee42071596e53f7a6bd1804b22e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"235ebc907b5b3d251a7670a3a10c347ac923e17b9c1a5dc051f76b7419c56dc8\""
Jul 15 11:30:09.268321 env[1207]: time="2025-07-15T11:30:09.268282126Z" level=info msg="StartContainer for \"235ebc907b5b3d251a7670a3a10c347ac923e17b9c1a5dc051f76b7419c56dc8\""
Jul 15 11:30:09.277365 env[1207]: time="2025-07-15T11:30:09.277310853Z" level=info msg="CreateContainer within sandbox \"0a513d63e164f8ce54003faa8fa0084a893432bdbaaf6ccb4bd41eb309bbf090\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fea9ccb1e62476e3a840f1b9d206ca720163dc75dd423d5ae4c9764532902fb\""
Jul 15 11:30:09.279815 env[1207]: time="2025-07-15T11:30:09.279774392Z" level=info msg="StartContainer for \"4fea9ccb1e62476e3a840f1b9d206ca720163dc75dd423d5ae4c9764532902fb\""
Jul 15 11:30:09.282364 systemd[1]: Started cri-containerd-235ebc907b5b3d251a7670a3a10c347ac923e17b9c1a5dc051f76b7419c56dc8.scope.
Jul 15 11:30:09.298187 systemd[1]: Started cri-containerd-4fea9ccb1e62476e3a840f1b9d206ca720163dc75dd423d5ae4c9764532902fb.scope.
Jul 15 11:30:09.330183 env[1207]: time="2025-07-15T11:30:09.330146595Z" level=info msg="StartContainer for \"235ebc907b5b3d251a7670a3a10c347ac923e17b9c1a5dc051f76b7419c56dc8\" returns successfully"
Jul 15 11:30:09.332106 env[1207]: time="2025-07-15T11:30:09.332072744Z" level=info msg="StartContainer for \"4fea9ccb1e62476e3a840f1b9d206ca720163dc75dd423d5ae4c9764532902fb\" returns successfully"
Jul 15 11:30:09.415391 sshd[3192]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:09.417819 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:40238.service: Deactivated successfully.
Jul 15 11:30:09.418463 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 11:30:09.419027 systemd-logind[1191]: Session 8 logged out. Waiting for processes to exit.
Jul 15 11:30:09.419783 systemd-logind[1191]: Removed session 8.
Jul 15 11:30:09.982629 kubelet[1925]: E0715 11:30:09.982592 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:09.984222 kubelet[1925]: E0715 11:30:09.984177 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:10.275619 kubelet[1925]: I0715 11:30:10.275475 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sw6mv" podStartSLOduration=27.275459574 podStartE2EDuration="27.275459574s" podCreationTimestamp="2025-07-15 11:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:30:10.126472677 +0000 UTC m=+34.284881270" watchObservedRunningTime="2025-07-15 11:30:10.275459574 +0000 UTC m=+34.433868157"
Jul 15 11:30:10.425112 kubelet[1925]: I0715 11:30:10.425037 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-frwt6" podStartSLOduration=27.425016512 podStartE2EDuration="27.425016512s" podCreationTimestamp="2025-07-15 11:29:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:30:10.424949467 +0000 UTC m=+34.583358050" watchObservedRunningTime="2025-07-15 11:30:10.425016512 +0000 UTC m=+34.583425096"
Jul 15 11:30:10.985923 kubelet[1925]: E0715 11:30:10.985892 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:10.986296 kubelet[1925]: E0715 11:30:10.985950 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:11.987700 kubelet[1925]: E0715 11:30:11.987669 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:11.988059 kubelet[1925]: E0715 11:30:11.987818 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:14.419167 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:40240.service.
Jul 15 11:30:14.459308 sshd[3383]: Accepted publickey for core from 10.0.0.1 port 40240 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:14.460367 sshd[3383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:14.463627 systemd-logind[1191]: New session 9 of user core.
Jul 15 11:30:14.464550 systemd[1]: Started session-9.scope.
Jul 15 11:30:14.569594 sshd[3383]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:14.571576 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:40240.service: Deactivated successfully.
Jul 15 11:30:14.572269 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 11:30:14.572762 systemd-logind[1191]: Session 9 logged out. Waiting for processes to exit.
Jul 15 11:30:14.573379 systemd-logind[1191]: Removed session 9.
Jul 15 11:30:19.573867 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:50828.service.
Jul 15 11:30:19.611271 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 50828 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:19.612192 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:19.615237 systemd-logind[1191]: New session 10 of user core.
Jul 15 11:30:19.615955 systemd[1]: Started session-10.scope.
Jul 15 11:30:19.730482 sshd[3397]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:19.733609 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:50828.service: Deactivated successfully.
Jul 15 11:30:19.734214 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 11:30:19.734706 systemd-logind[1191]: Session 10 logged out. Waiting for processes to exit.
Jul 15 11:30:19.735938 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:50830.service.
Jul 15 11:30:19.736862 systemd-logind[1191]: Removed session 10.
Jul 15 11:30:19.777596 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 50830 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:19.778834 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:19.782149 systemd-logind[1191]: New session 11 of user core.
Jul 15 11:30:19.782962 systemd[1]: Started session-11.scope.
Jul 15 11:30:19.926130 sshd[3411]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:19.937695 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:50842.service.
Jul 15 11:30:19.938390 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:50830.service: Deactivated successfully.
Jul 15 11:30:19.939224 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 11:30:19.940148 systemd-logind[1191]: Session 11 logged out. Waiting for processes to exit.
Jul 15 11:30:19.941048 systemd-logind[1191]: Removed session 11.
Jul 15 11:30:19.976259 sshd[3421]: Accepted publickey for core from 10.0.0.1 port 50842 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:19.977313 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:19.980297 systemd-logind[1191]: New session 12 of user core.
Jul 15 11:30:19.981082 systemd[1]: Started session-12.scope.
Jul 15 11:30:20.086546 sshd[3421]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:20.088880 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:50842.service: Deactivated successfully.
Jul 15 11:30:20.089570 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 11:30:20.090054 systemd-logind[1191]: Session 12 logged out. Waiting for processes to exit.
Jul 15 11:30:20.090700 systemd-logind[1191]: Removed session 12.
Jul 15 11:30:25.090235 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:50844.service.
Jul 15 11:30:25.130383 sshd[3435]: Accepted publickey for core from 10.0.0.1 port 50844 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:25.131501 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:25.134704 systemd-logind[1191]: New session 13 of user core.
Jul 15 11:30:25.135452 systemd[1]: Started session-13.scope.
Jul 15 11:30:25.237495 sshd[3435]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:25.240338 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:50844.service: Deactivated successfully.
Jul 15 11:30:25.241023 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 11:30:25.241597 systemd-logind[1191]: Session 13 logged out. Waiting for processes to exit.
Jul 15 11:30:25.242271 systemd-logind[1191]: Removed session 13.
Jul 15 11:30:30.241317 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:39686.service.
Jul 15 11:30:30.278202 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 39686 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:30.278986 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:30.281744 systemd-logind[1191]: New session 14 of user core.
Jul 15 11:30:30.282482 systemd[1]: Started session-14.scope.
Jul 15 11:30:30.378396 sshd[3448]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:30.380788 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:39686.service: Deactivated successfully.
Jul 15 11:30:30.381234 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 11:30:30.382362 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:39692.service.
Jul 15 11:30:30.382829 systemd-logind[1191]: Session 14 logged out. Waiting for processes to exit.
Jul 15 11:30:30.383549 systemd-logind[1191]: Removed session 14.
Jul 15 11:30:30.420222 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 39692 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:30.421267 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:30.424024 systemd-logind[1191]: New session 15 of user core.
Jul 15 11:30:30.424752 systemd[1]: Started session-15.scope.
Jul 15 11:30:30.606357 sshd[3461]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:30.609091 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:39692.service: Deactivated successfully.
Jul 15 11:30:30.609682 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 11:30:30.610428 systemd-logind[1191]: Session 15 logged out. Waiting for processes to exit.
Jul 15 11:30:30.611454 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:39708.service.
Jul 15 11:30:30.612118 systemd-logind[1191]: Removed session 15.
Jul 15 11:30:30.650271 sshd[3472]: Accepted publickey for core from 10.0.0.1 port 39708 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:30.651141 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:30.654367 systemd-logind[1191]: New session 16 of user core.
Jul 15 11:30:30.655087 systemd[1]: Started session-16.scope.
Jul 15 11:30:31.089521 sshd[3472]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:31.092481 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:39708.service: Deactivated successfully.
Jul 15 11:30:31.092962 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 11:30:31.095360 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:39712.service.
Jul 15 11:30:31.095990 systemd-logind[1191]: Session 16 logged out. Waiting for processes to exit.
Jul 15 11:30:31.097031 systemd-logind[1191]: Removed session 16.
Jul 15 11:30:31.135548 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 39712 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:31.137743 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:31.141266 systemd-logind[1191]: New session 17 of user core.
Jul 15 11:30:31.142021 systemd[1]: Started session-17.scope.
Jul 15 11:30:31.388646 sshd[3490]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:31.393245 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:39722.service.
Jul 15 11:30:31.395996 systemd-logind[1191]: Session 17 logged out. Waiting for processes to exit.
Jul 15 11:30:31.397051 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:39712.service: Deactivated successfully.
Jul 15 11:30:31.397857 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 11:30:31.398674 systemd-logind[1191]: Removed session 17.
Jul 15 11:30:31.433386 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 39722 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:31.434476 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:31.438008 systemd-logind[1191]: New session 18 of user core.
Jul 15 11:30:31.438733 systemd[1]: Started session-18.scope.
Jul 15 11:30:31.555214 sshd[3501]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:31.557417 systemd-logind[1191]: Session 18 logged out. Waiting for processes to exit.
Jul 15 11:30:31.558546 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:39722.service: Deactivated successfully.
Jul 15 11:30:31.559120 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 11:30:31.560165 systemd-logind[1191]: Removed session 18.
Jul 15 11:30:36.558801 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:39738.service.
Jul 15 11:30:36.596180 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 39738 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:36.597076 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:36.600218 systemd-logind[1191]: New session 19 of user core.
Jul 15 11:30:36.601191 systemd[1]: Started session-19.scope.
Jul 15 11:30:36.699493 sshd[3518]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:36.701415 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:39738.service: Deactivated successfully.
Jul 15 11:30:36.702039 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 11:30:36.702554 systemd-logind[1191]: Session 19 logged out. Waiting for processes to exit.
Jul 15 11:30:36.703211 systemd-logind[1191]: Removed session 19.
Jul 15 11:30:41.704210 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:38684.service.
Jul 15 11:30:41.741924 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 38684 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:41.742971 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:41.746029 systemd-logind[1191]: New session 20 of user core.
Jul 15 11:30:41.746766 systemd[1]: Started session-20.scope.
Jul 15 11:30:41.846924 sshd[3534]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:41.849138 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:38684.service: Deactivated successfully.
Jul 15 11:30:41.849820 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 11:30:41.850378 systemd-logind[1191]: Session 20 logged out. Waiting for processes to exit.
Jul 15 11:30:41.850988 systemd-logind[1191]: Removed session 20.
Jul 15 11:30:46.850547 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:38698.service.
Jul 15 11:30:46.892499 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 38698 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:46.893879 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:46.897451 systemd-logind[1191]: New session 21 of user core.
Jul 15 11:30:46.898418 systemd[1]: Started session-21.scope.
Jul 15 11:30:47.002073 sshd[3550]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:47.004622 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:38698.service: Deactivated successfully.
Jul 15 11:30:47.005078 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 11:30:47.005527 systemd-logind[1191]: Session 21 logged out. Waiting for processes to exit.
Jul 15 11:30:47.006309 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:38700.service.
Jul 15 11:30:47.006886 systemd-logind[1191]: Removed session 21.
Jul 15 11:30:47.044028 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 38700 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:47.045010 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:47.047600 systemd-logind[1191]: New session 22 of user core.
Jul 15 11:30:47.048299 systemd[1]: Started session-22.scope.
Jul 15 11:30:48.364766 env[1207]: time="2025-07-15T11:30:48.364726868Z" level=info msg="StopContainer for \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\" with timeout 30 (s)"
Jul 15 11:30:48.365312 env[1207]: time="2025-07-15T11:30:48.365294407Z" level=info msg="Stop container \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\" with signal terminated"
Jul 15 11:30:48.378148 systemd[1]: cri-containerd-84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967.scope: Deactivated successfully.
Jul 15 11:30:48.386132 env[1207]: time="2025-07-15T11:30:48.385935463Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 11:30:48.391960 env[1207]: time="2025-07-15T11:30:48.391926492Z" level=info msg="StopContainer for \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\" with timeout 2 (s)"
Jul 15 11:30:48.392157 env[1207]: time="2025-07-15T11:30:48.392136725Z" level=info msg="Stop container \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\" with signal terminated"
Jul 15 11:30:48.395049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967-rootfs.mount: Deactivated successfully.
Jul 15 11:30:48.397793 systemd-networkd[1027]: lxc_health: Link DOWN
Jul 15 11:30:48.397803 systemd-networkd[1027]: lxc_health: Lost carrier
Jul 15 11:30:48.402844 env[1207]: time="2025-07-15T11:30:48.402800452Z" level=info msg="shim disconnected" id=84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967
Jul 15 11:30:48.402956 env[1207]: time="2025-07-15T11:30:48.402854847Z" level=warning msg="cleaning up after shim disconnected" id=84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967 namespace=k8s.io
Jul 15 11:30:48.402956 env[1207]: time="2025-07-15T11:30:48.402871729Z" level=info msg="cleaning up dead shim"
Jul 15 11:30:48.409105 env[1207]: time="2025-07-15T11:30:48.409072150Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3619 runtime=io.containerd.runc.v2\n"
Jul 15 11:30:48.413917 env[1207]: time="2025-07-15T11:30:48.411979920Z" level=info msg="StopContainer for \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\" returns successfully"
Jul 15 11:30:48.413917 env[1207]: time="2025-07-15T11:30:48.412959531Z" level=info msg="StopPodSandbox for \"e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd\""
Jul 15 11:30:48.413917 env[1207]: time="2025-07-15T11:30:48.413020738Z" level=info msg="Container to stop \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:30:48.414831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd-shm.mount: Deactivated successfully.
Jul 15 11:30:48.421406 systemd[1]: cri-containerd-e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd.scope: Deactivated successfully.
Jul 15 11:30:48.431682 systemd[1]: cri-containerd-9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc.scope: Deactivated successfully.
Jul 15 11:30:48.432032 systemd[1]: cri-containerd-9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc.scope: Consumed 5.750s CPU time.
Jul 15 11:30:48.439424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd-rootfs.mount: Deactivated successfully.
Jul 15 11:30:48.445815 env[1207]: time="2025-07-15T11:30:48.445656105Z" level=info msg="shim disconnected" id=e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd
Jul 15 11:30:48.445965 env[1207]: time="2025-07-15T11:30:48.445818056Z" level=warning msg="cleaning up after shim disconnected" id=e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd namespace=k8s.io
Jul 15 11:30:48.445965 env[1207]: time="2025-07-15T11:30:48.445829599Z" level=info msg="cleaning up dead shim"
Jul 15 11:30:48.449518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc-rootfs.mount: Deactivated successfully.
Jul 15 11:30:48.451554 env[1207]: time="2025-07-15T11:30:48.451509189Z" level=info msg="shim disconnected" id=9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc
Jul 15 11:30:48.451768 env[1207]: time="2025-07-15T11:30:48.451744761Z" level=warning msg="cleaning up after shim disconnected" id=9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc namespace=k8s.io
Jul 15 11:30:48.451853 env[1207]: time="2025-07-15T11:30:48.451832199Z" level=info msg="cleaning up dead shim"
Jul 15 11:30:48.454831 env[1207]: time="2025-07-15T11:30:48.454776931Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3664 runtime=io.containerd.runc.v2\n"
Jul 15 11:30:48.455202 env[1207]: time="2025-07-15T11:30:48.455164104Z" level=info msg="TearDown network for sandbox \"e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd\" successfully"
Jul 15 11:30:48.455202 env[1207]: time="2025-07-15T11:30:48.455193359Z" level=info msg="StopPodSandbox for \"e044ba0dba0acef286e72975916c8ad9cddfe3500dc85a2ab2be3865813652dd\" returns successfully"
Jul 15 11:30:48.459096 env[1207]: time="2025-07-15T11:30:48.459064550Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3672 runtime=io.containerd.runc.v2\n"
Jul 15 11:30:48.461525 env[1207]: time="2025-07-15T11:30:48.461491357Z" level=info msg="StopContainer for \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\" returns successfully"
Jul 15 11:30:48.461777 env[1207]: time="2025-07-15T11:30:48.461751998Z" level=info msg="StopPodSandbox for \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\""
Jul 15 11:30:48.461834 env[1207]: time="2025-07-15T11:30:48.461799369Z" level=info msg="Container to stop \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:30:48.461834 env[1207]: time="2025-07-15T11:30:48.461812174Z" level=info msg="Container to stop \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:30:48.461834 env[1207]: time="2025-07-15T11:30:48.461822122Z" level=info msg="Container to stop \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:30:48.461834 env[1207]: time="2025-07-15T11:30:48.461832101Z" level=info msg="Container to stop \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:30:48.462029 env[1207]: time="2025-07-15T11:30:48.461843463Z" level=info msg="Container to stop \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 11:30:48.467083 systemd[1]: cri-containerd-fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a.scope: Deactivated successfully.
Jul 15 11:30:48.485741 env[1207]: time="2025-07-15T11:30:48.485601763Z" level=info msg="shim disconnected" id=fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a Jul 15 11:30:48.485903 env[1207]: time="2025-07-15T11:30:48.485741491Z" level=warning msg="cleaning up after shim disconnected" id=fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a namespace=k8s.io Jul 15 11:30:48.485903 env[1207]: time="2025-07-15T11:30:48.485756470Z" level=info msg="cleaning up dead shim" Jul 15 11:30:48.494221 env[1207]: time="2025-07-15T11:30:48.492425279Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3706 runtime=io.containerd.runc.v2\n" Jul 15 11:30:48.494221 env[1207]: time="2025-07-15T11:30:48.492687182Z" level=info msg="TearDown network for sandbox \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" successfully" Jul 15 11:30:48.494221 env[1207]: time="2025-07-15T11:30:48.492705758Z" level=info msg="StopPodSandbox for \"fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a\" returns successfully" Jul 15 11:30:48.527854 kubelet[1925]: I0715 11:30:48.527813 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-lib-modules\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.527854 kubelet[1925]: I0715 11:30:48.527850 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-host-proc-sys-net\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528200 kubelet[1925]: I0715 11:30:48.527867 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-run\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528200 kubelet[1925]: I0715 11:30:48.527894 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a4da12c-43ff-4174-8638-aebbbd591384-clustermesh-secrets\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528200 kubelet[1925]: I0715 11:30:48.527911 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-hostproc\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528200 kubelet[1925]: I0715 11:30:48.527923 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-xtables-lock\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528200 kubelet[1925]: I0715 11:30:48.527937 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cni-path\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528200 kubelet[1925]: I0715 11:30:48.527931 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.528409 kubelet[1925]: I0715 11:30:48.527953 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-config-path\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528409 kubelet[1925]: I0715 11:30:48.527948 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.528409 kubelet[1925]: I0715 11:30:48.527967 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-host-proc-sys-kernel\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528409 kubelet[1925]: I0715 11:30:48.527984 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.528409 kubelet[1925]: I0715 11:30:48.527991 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.528527 kubelet[1925]: I0715 11:30:48.528016 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a4da12c-43ff-4174-8638-aebbbd591384-hubble-tls\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528527 kubelet[1925]: I0715 11:30:48.528038 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccf2d\" (UniqueName: \"kubernetes.io/projected/6a4da12c-43ff-4174-8638-aebbbd591384-kube-api-access-ccf2d\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.528527 kubelet[1925]: I0715 11:30:48.528046 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cni-path" (OuterVolumeSpecName: "cni-path") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.528527 kubelet[1925]: I0715 11:30:48.528057 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd48d\" (UniqueName: \"kubernetes.io/projected/effa7e08-bad7-49a3-b9a6-4055cd985be1-kube-api-access-nd48d\") pod \"effa7e08-bad7-49a3-b9a6-4055cd985be1\" (UID: \"effa7e08-bad7-49a3-b9a6-4055cd985be1\") " Jul 15 11:30:48.528527 kubelet[1925]: I0715 11:30:48.528222 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-hostproc" (OuterVolumeSpecName: "hostproc") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.529503 kubelet[1925]: I0715 11:30:48.528372 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.529503 kubelet[1925]: I0715 11:30:48.528935 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-bpf-maps\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.529503 kubelet[1925]: I0715 11:30:48.528962 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-cgroup\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.529503 kubelet[1925]: I0715 11:30:48.528988 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/effa7e08-bad7-49a3-b9a6-4055cd985be1-cilium-config-path\") pod \"effa7e08-bad7-49a3-b9a6-4055cd985be1\" (UID: \"effa7e08-bad7-49a3-b9a6-4055cd985be1\") " Jul 15 11:30:48.529503 kubelet[1925]: I0715 11:30:48.529001 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-etc-cni-netd\") pod \"6a4da12c-43ff-4174-8638-aebbbd591384\" (UID: \"6a4da12c-43ff-4174-8638-aebbbd591384\") " Jul 15 11:30:48.529503 kubelet[1925]: I0715 11:30:48.529044 1925 reconciler_common.go:299] "Volume 
detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.529645 kubelet[1925]: I0715 11:30:48.529052 1925 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.529645 kubelet[1925]: I0715 11:30:48.529059 1925 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.529645 kubelet[1925]: I0715 11:30:48.529065 1925 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.529645 kubelet[1925]: I0715 11:30:48.529072 1925 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.529645 kubelet[1925]: I0715 11:30:48.529079 1925 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.529645 kubelet[1925]: I0715 11:30:48.529085 1925 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.529645 kubelet[1925]: I0715 11:30:48.529102 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-etc-cni-netd" 
(OuterVolumeSpecName: "etc-cni-netd") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.530040 kubelet[1925]: I0715 11:30:48.529119 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.530040 kubelet[1925]: I0715 11:30:48.529140 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 11:30:48.530194 kubelet[1925]: I0715 11:30:48.530165 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 11:30:48.531638 kubelet[1925]: I0715 11:30:48.531125 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/effa7e08-bad7-49a3-b9a6-4055cd985be1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "effa7e08-bad7-49a3-b9a6-4055cd985be1" (UID: "effa7e08-bad7-49a3-b9a6-4055cd985be1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 11:30:48.532238 kubelet[1925]: I0715 11:30:48.532207 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/effa7e08-bad7-49a3-b9a6-4055cd985be1-kube-api-access-nd48d" (OuterVolumeSpecName: "kube-api-access-nd48d") pod "effa7e08-bad7-49a3-b9a6-4055cd985be1" (UID: "effa7e08-bad7-49a3-b9a6-4055cd985be1"). InnerVolumeSpecName "kube-api-access-nd48d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:30:48.532788 kubelet[1925]: I0715 11:30:48.532759 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a4da12c-43ff-4174-8638-aebbbd591384-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 11:30:48.533628 kubelet[1925]: I0715 11:30:48.533602 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4da12c-43ff-4174-8638-aebbbd591384-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:30:48.534364 kubelet[1925]: I0715 11:30:48.534227 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4da12c-43ff-4174-8638-aebbbd591384-kube-api-access-ccf2d" (OuterVolumeSpecName: "kube-api-access-ccf2d") pod "6a4da12c-43ff-4174-8638-aebbbd591384" (UID: "6a4da12c-43ff-4174-8638-aebbbd591384"). InnerVolumeSpecName "kube-api-access-ccf2d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 11:30:48.629676 kubelet[1925]: I0715 11:30:48.629595 1925 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ccf2d\" (UniqueName: \"kubernetes.io/projected/6a4da12c-43ff-4174-8638-aebbbd591384-kube-api-access-ccf2d\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.629676 kubelet[1925]: I0715 11:30:48.629618 1925 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nd48d\" (UniqueName: \"kubernetes.io/projected/effa7e08-bad7-49a3-b9a6-4055cd985be1-kube-api-access-nd48d\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.629676 kubelet[1925]: I0715 11:30:48.629628 1925 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.629676 kubelet[1925]: I0715 11:30:48.629638 1925 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.629676 kubelet[1925]: I0715 11:30:48.629644 1925 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/effa7e08-bad7-49a3-b9a6-4055cd985be1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.629676 kubelet[1925]: I0715 11:30:48.629652 1925 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a4da12c-43ff-4174-8638-aebbbd591384-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.629676 kubelet[1925]: I0715 11:30:48.629659 1925 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a4da12c-43ff-4174-8638-aebbbd591384-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 
11:30:48.629676 kubelet[1925]: I0715 11:30:48.629665 1925 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a4da12c-43ff-4174-8638-aebbbd591384-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:48.630035 kubelet[1925]: I0715 11:30:48.629672 1925 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a4da12c-43ff-4174-8638-aebbbd591384-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 11:30:49.049008 kubelet[1925]: I0715 11:30:49.048972 1925 scope.go:117] "RemoveContainer" containerID="84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967" Jul 15 11:30:49.050780 env[1207]: time="2025-07-15T11:30:49.050747427Z" level=info msg="RemoveContainer for \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\"" Jul 15 11:30:49.052723 systemd[1]: Removed slice kubepods-besteffort-podeffa7e08_bad7_49a3_b9a6_4055cd985be1.slice. Jul 15 11:30:49.056527 systemd[1]: Removed slice kubepods-burstable-pod6a4da12c_43ff_4174_8638_aebbbd591384.slice. Jul 15 11:30:49.056600 systemd[1]: kubepods-burstable-pod6a4da12c_43ff_4174_8638_aebbbd591384.slice: Consumed 5.841s CPU time. 
Jul 15 11:30:49.086130 env[1207]: time="2025-07-15T11:30:49.086071623Z" level=info msg="RemoveContainer for \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\" returns successfully" Jul 15 11:30:49.086417 kubelet[1925]: I0715 11:30:49.086374 1925 scope.go:117] "RemoveContainer" containerID="84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967" Jul 15 11:30:49.086800 env[1207]: time="2025-07-15T11:30:49.086728433Z" level=error msg="ContainerStatus for \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\": not found" Jul 15 11:30:49.086969 kubelet[1925]: E0715 11:30:49.086939 1925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\": not found" containerID="84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967" Jul 15 11:30:49.087034 kubelet[1925]: I0715 11:30:49.086978 1925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967"} err="failed to get container status \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\": rpc error: code = NotFound desc = an error occurred when try to find container \"84b41f497a185bf570d70e2b2f09da3a84f85d1bfa9ed2567ffa526b2137b967\": not found" Jul 15 11:30:49.087034 kubelet[1925]: I0715 11:30:49.087015 1925 scope.go:117] "RemoveContainer" containerID="9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc" Jul 15 11:30:49.088511 env[1207]: time="2025-07-15T11:30:49.088286170Z" level=info msg="RemoveContainer for \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\"" Jul 15 11:30:49.093668 env[1207]: 
time="2025-07-15T11:30:49.093618008Z" level=info msg="RemoveContainer for \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\" returns successfully" Jul 15 11:30:49.093818 kubelet[1925]: I0715 11:30:49.093792 1925 scope.go:117] "RemoveContainer" containerID="51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd" Jul 15 11:30:49.095564 env[1207]: time="2025-07-15T11:30:49.095539944Z" level=info msg="RemoveContainer for \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\"" Jul 15 11:30:49.098492 env[1207]: time="2025-07-15T11:30:49.098456358Z" level=info msg="RemoveContainer for \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\" returns successfully" Jul 15 11:30:49.098638 kubelet[1925]: I0715 11:30:49.098608 1925 scope.go:117] "RemoveContainer" containerID="0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137" Jul 15 11:30:49.099662 env[1207]: time="2025-07-15T11:30:49.099626152Z" level=info msg="RemoveContainer for \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\"" Jul 15 11:30:49.103399 env[1207]: time="2025-07-15T11:30:49.103372429Z" level=info msg="RemoveContainer for \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\" returns successfully" Jul 15 11:30:49.103677 kubelet[1925]: I0715 11:30:49.103624 1925 scope.go:117] "RemoveContainer" containerID="14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b" Jul 15 11:30:49.104991 env[1207]: time="2025-07-15T11:30:49.104957799Z" level=info msg="RemoveContainer for \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\"" Jul 15 11:30:49.107964 env[1207]: time="2025-07-15T11:30:49.107942164Z" level=info msg="RemoveContainer for \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\" returns successfully" Jul 15 11:30:49.108087 kubelet[1925]: I0715 11:30:49.108055 1925 scope.go:117] "RemoveContainer" containerID="10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f" 
Jul 15 11:30:49.109161 env[1207]: time="2025-07-15T11:30:49.109076690Z" level=info msg="RemoveContainer for \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\"" Jul 15 11:30:49.112021 env[1207]: time="2025-07-15T11:30:49.111983295Z" level=info msg="RemoveContainer for \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\" returns successfully" Jul 15 11:30:49.112204 kubelet[1925]: I0715 11:30:49.112158 1925 scope.go:117] "RemoveContainer" containerID="9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc" Jul 15 11:30:49.112438 env[1207]: time="2025-07-15T11:30:49.112381018Z" level=error msg="ContainerStatus for \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\": not found" Jul 15 11:30:49.112601 kubelet[1925]: E0715 11:30:49.112577 1925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\": not found" containerID="9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc" Jul 15 11:30:49.112656 kubelet[1925]: I0715 11:30:49.112622 1925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc"} err="failed to get container status \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"9641728591bd4dcf19fd5f73c32572a2aa50ce0c466eb12d34e6e7fbae97ffbc\": not found" Jul 15 11:30:49.112656 kubelet[1925]: I0715 11:30:49.112644 1925 scope.go:117] "RemoveContainer" containerID="51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd" Jul 15 11:30:49.112824 env[1207]: 
time="2025-07-15T11:30:49.112784853Z" level=error msg="ContainerStatus for \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\": not found" Jul 15 11:30:49.112976 kubelet[1925]: E0715 11:30:49.112942 1925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\": not found" containerID="51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd" Jul 15 11:30:49.113028 kubelet[1925]: I0715 11:30:49.112979 1925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd"} err="failed to get container status \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"51bbd9f4ac52ed9df0dee02c4b091f9514918d57d71ddbc19ddab48ae727b4fd\": not found" Jul 15 11:30:49.113028 kubelet[1925]: I0715 11:30:49.112999 1925 scope.go:117] "RemoveContainer" containerID="0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137" Jul 15 11:30:49.113294 env[1207]: time="2025-07-15T11:30:49.113195741Z" level=error msg="ContainerStatus for \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\": not found" Jul 15 11:30:49.113452 kubelet[1925]: E0715 11:30:49.113415 1925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\": not found" 
containerID="0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137" Jul 15 11:30:49.113452 kubelet[1925]: I0715 11:30:49.113443 1925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137"} err="failed to get container status \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d281bb770e4138238647ec2c08aebb2e34496722c7b0f889c59e2249ba48137\": not found" Jul 15 11:30:49.113452 kubelet[1925]: I0715 11:30:49.113461 1925 scope.go:117] "RemoveContainer" containerID="14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b" Jul 15 11:30:49.113655 env[1207]: time="2025-07-15T11:30:49.113599735Z" level=error msg="ContainerStatus for \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\": not found" Jul 15 11:30:49.113730 kubelet[1925]: E0715 11:30:49.113708 1925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\": not found" containerID="14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b" Jul 15 11:30:49.113780 kubelet[1925]: I0715 11:30:49.113729 1925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b"} err="failed to get container status \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"14abc306f52acc234f25520e6bb1db9eb07f7899e2a82b399f741198c2829d9b\": not found" Jul 15 11:30:49.113780 
kubelet[1925]: I0715 11:30:49.113746 1925 scope.go:117] "RemoveContainer" containerID="10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f" Jul 15 11:30:49.113909 env[1207]: time="2025-07-15T11:30:49.113856327Z" level=error msg="ContainerStatus for \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\": not found" Jul 15 11:30:49.114018 kubelet[1925]: E0715 11:30:49.113996 1925 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\": not found" containerID="10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f" Jul 15 11:30:49.114090 kubelet[1925]: I0715 11:30:49.114018 1925 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f"} err="failed to get container status \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"10b532bd059ddf89a89abb054597a1d7eb5e30fd9133f74b3aa7ebb080394e2f\": not found" Jul 15 11:30:49.371672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a-rootfs.mount: Deactivated successfully. Jul 15 11:30:49.371763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fcc92d375be40216a6f8fdc061e68ccd54875c542f6148af57b3b7abd56b420a-shm.mount: Deactivated successfully. Jul 15 11:30:49.371815 systemd[1]: var-lib-kubelet-pods-6a4da12c\x2d43ff\x2d4174\x2d8638\x2daebbbd591384-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccf2d.mount: Deactivated successfully. 
Jul 15 11:30:49.371867 systemd[1]: var-lib-kubelet-pods-effa7e08\x2dbad7\x2d49a3\x2db9a6\x2d4055cd985be1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnd48d.mount: Deactivated successfully. Jul 15 11:30:49.371935 systemd[1]: var-lib-kubelet-pods-6a4da12c\x2d43ff\x2d4174\x2d8638\x2daebbbd591384-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 11:30:49.371980 systemd[1]: var-lib-kubelet-pods-6a4da12c\x2d43ff\x2d4174\x2d8638\x2daebbbd591384-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 11:30:49.912953 kubelet[1925]: I0715 11:30:49.912895 1925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4da12c-43ff-4174-8638-aebbbd591384" path="/var/lib/kubelet/pods/6a4da12c-43ff-4174-8638-aebbbd591384/volumes" Jul 15 11:30:49.913611 kubelet[1925]: I0715 11:30:49.913584 1925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="effa7e08-bad7-49a3-b9a6-4055cd985be1" path="/var/lib/kubelet/pods/effa7e08-bad7-49a3-b9a6-4055cd985be1/volumes" Jul 15 11:30:50.334687 sshd[3563]: pam_unix(sshd:session): session closed for user core Jul 15 11:30:50.337391 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:38700.service: Deactivated successfully. Jul 15 11:30:50.337885 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 11:30:50.339358 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:44286.service. Jul 15 11:30:50.339776 systemd-logind[1191]: Session 22 logged out. Waiting for processes to exit. Jul 15 11:30:50.340727 systemd-logind[1191]: Removed session 22. Jul 15 11:30:50.381503 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 44286 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w Jul 15 11:30:50.382617 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:30:50.386005 systemd-logind[1191]: New session 23 of user core. Jul 15 11:30:50.386740 systemd[1]: Started session-23.scope. 
Jul 15 11:30:50.937526 kubelet[1925]: E0715 11:30:50.937477 1925 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 15 11:30:50.955037 sshd[3724]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:50.957713 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:44286.service: Deactivated successfully.
Jul 15 11:30:50.958241 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 11:30:50.959686 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:44300.service.
Jul 15 11:30:50.960160 systemd-logind[1191]: Session 23 logged out. Waiting for processes to exit.
Jul 15 11:30:50.961014 systemd-logind[1191]: Removed session 23.
Jul 15 11:30:50.987809 systemd[1]: Created slice kubepods-burstable-podb63b2ace_fc97_42ff_a2ca_31ec7b9f42a9.slice.
Jul 15 11:30:50.999397 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 44300 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:51.000439 sshd[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:51.005015 systemd-logind[1191]: New session 24 of user core.
Jul 15 11:30:51.005776 systemd[1]: Started session-24.scope.
Jul 15 11:30:51.042004 kubelet[1925]: I0715 11:30:51.041963 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-host-proc-sys-kernel\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042161 kubelet[1925]: I0715 11:30:51.042028 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-run\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042161 kubelet[1925]: I0715 11:30:51.042046 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-hostproc\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042161 kubelet[1925]: I0715 11:30:51.042062 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-cgroup\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042161 kubelet[1925]: I0715 11:30:51.042075 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cni-path\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042161 kubelet[1925]: I0715 11:30:51.042101 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-etc-cni-netd\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042161 kubelet[1925]: I0715 11:30:51.042113 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-lib-modules\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042345 kubelet[1925]: I0715 11:30:51.042127 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-clustermesh-secrets\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042345 kubelet[1925]: I0715 11:30:51.042139 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-ipsec-secrets\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042345 kubelet[1925]: I0715 11:30:51.042153 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-host-proc-sys-net\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042345 kubelet[1925]: I0715 11:30:51.042179 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-bpf-maps\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042345 kubelet[1925]: I0715 11:30:51.042202 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-xtables-lock\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042345 kubelet[1925]: I0715 11:30:51.042216 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-hubble-tls\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042484 kubelet[1925]: I0715 11:30:51.042261 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6t57\" (UniqueName: \"kubernetes.io/projected/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-kube-api-access-v6t57\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.042484 kubelet[1925]: I0715 11:30:51.042287 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-config-path\") pod \"cilium-qwkgp\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") " pod="kube-system/cilium-qwkgp"
Jul 15 11:30:51.120294 sshd[3736]: pam_unix(sshd:session): session closed for user core
Jul 15 11:30:51.123423 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:44310.service.
Jul 15 11:30:51.132089 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:44300.service: Deactivated successfully.
Jul 15 11:30:51.132853 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 11:30:51.133733 systemd-logind[1191]: Session 24 logged out. Waiting for processes to exit.
Jul 15 11:30:51.134479 systemd-logind[1191]: Removed session 24.
Jul 15 11:30:51.137177 kubelet[1925]: E0715 11:30:51.137134 1925 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-v6t57 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-qwkgp" podUID="b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"
Jul 15 11:30:51.171279 sshd[3748]: Accepted publickey for core from 10.0.0.1 port 44310 ssh2: RSA SHA256:UAnaMym03FNQ3Em4JmRfExsPnzWeaW932gzAKk7u+5w
Jul 15 11:30:51.172246 sshd[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 15 11:30:51.175225 systemd-logind[1191]: New session 25 of user core.
Jul 15 11:30:51.176054 systemd[1]: Started session-25.scope.
Jul 15 11:30:51.910501 kubelet[1925]: E0715 11:30:51.910461 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:52.148990 kubelet[1925]: I0715 11:30:52.148950 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-lib-modules\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.148990 kubelet[1925]: I0715 11:30:52.148998 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-bpf-maps\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149391 kubelet[1925]: I0715 11:30:52.149026 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-hubble-tls\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149391 kubelet[1925]: I0715 11:30:52.149048 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-ipsec-secrets\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149391 kubelet[1925]: I0715 11:30:52.149068 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-etc-cni-netd\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149391 kubelet[1925]: I0715 11:30:52.149065 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.149391 kubelet[1925]: I0715 11:30:52.149084 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cni-path\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149391 kubelet[1925]: I0715 11:30:52.149103 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-host-proc-sys-kernel\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149530 kubelet[1925]: I0715 11:30:52.149091 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.149530 kubelet[1925]: I0715 11:30:52.149119 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-hostproc\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149530 kubelet[1925]: I0715 11:30:52.149139 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-clustermesh-secrets\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149530 kubelet[1925]: I0715 11:30:52.149159 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-host-proc-sys-net\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149530 kubelet[1925]: I0715 11:30:52.149179 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6t57\" (UniqueName: \"kubernetes.io/projected/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-kube-api-access-v6t57\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149530 kubelet[1925]: I0715 11:30:52.149201 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-xtables-lock\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149674 kubelet[1925]: I0715 11:30:52.149218 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-run\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149674 kubelet[1925]: I0715 11:30:52.149239 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-cgroup\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149674 kubelet[1925]: I0715 11:30:52.149277 1925 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-config-path\") pod \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\" (UID: \"b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9\") "
Jul 15 11:30:52.149674 kubelet[1925]: I0715 11:30:52.149314 1925 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.149674 kubelet[1925]: I0715 11:30:52.149326 1925 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.150085 kubelet[1925]: I0715 11:30:52.149139 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cni-path" (OuterVolumeSpecName: "cni-path") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.150085 kubelet[1925]: I0715 11:30:52.149151 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.150167 kubelet[1925]: I0715 11:30:52.149831 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.150167 kubelet[1925]: I0715 11:30:52.149849 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.150167 kubelet[1925]: I0715 11:30:52.149872 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-hostproc" (OuterVolumeSpecName: "hostproc") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.150167 kubelet[1925]: I0715 11:30:52.150086 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.150167 kubelet[1925]: I0715 11:30:52.150130 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.150304 kubelet[1925]: I0715 11:30:52.150154 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 11:30:52.151323 kubelet[1925]: I0715 11:30:52.151298 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 15 11:30:52.153103 systemd[1]: var-lib-kubelet-pods-b63b2ace\x2dfc97\x2d42ff\x2da2ca\x2d31ec7b9f42a9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 15 11:30:52.153200 systemd[1]: var-lib-kubelet-pods-b63b2ace\x2dfc97\x2d42ff\x2da2ca\x2d31ec7b9f42a9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Jul 15 11:30:52.153844 kubelet[1925]: I0715 11:30:52.153823 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 15 11:30:52.154088 kubelet[1925]: I0715 11:30:52.154035 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-kube-api-access-v6t57" (OuterVolumeSpecName: "kube-api-access-v6t57") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "kube-api-access-v6t57". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 15 11:30:52.154412 kubelet[1925]: I0715 11:30:52.154391 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 15 11:30:52.154533 kubelet[1925]: I0715 11:30:52.154420 1925 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" (UID: "b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 15 11:30:52.155278 systemd[1]: var-lib-kubelet-pods-b63b2ace\x2dfc97\x2d42ff\x2da2ca\x2d31ec7b9f42a9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv6t57.mount: Deactivated successfully.
Jul 15 11:30:52.155388 systemd[1]: var-lib-kubelet-pods-b63b2ace\x2dfc97\x2d42ff\x2da2ca\x2d31ec7b9f42a9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 15 11:30:52.249922 kubelet[1925]: I0715 11:30:52.249876 1925 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.249922 kubelet[1925]: I0715 11:30:52.249909 1925 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.249922 kubelet[1925]: I0715 11:30:52.249926 1925 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250136 kubelet[1925]: I0715 11:30:52.249936 1925 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250136 kubelet[1925]: I0715 11:30:52.249958 1925 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250136 kubelet[1925]: I0715 11:30:52.249969 1925 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250136 kubelet[1925]: I0715 11:30:52.249980 1925 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250136 kubelet[1925]: I0715 11:30:52.249989 1925 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250136 kubelet[1925]: I0715 11:30:52.249998 1925 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250136 kubelet[1925]: I0715 11:30:52.250008 1925 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250136 kubelet[1925]: I0715 11:30:52.250018 1925 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v6t57\" (UniqueName: \"kubernetes.io/projected/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-kube-api-access-v6t57\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250340 kubelet[1925]: I0715 11:30:52.250029 1925 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:52.250340 kubelet[1925]: I0715 11:30:52.250038 1925 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 15 11:30:53.065656 systemd[1]: Removed slice kubepods-burstable-podb63b2ace_fc97_42ff_a2ca_31ec7b9f42a9.slice.
Jul 15 11:30:53.107441 systemd[1]: Created slice kubepods-burstable-pod758d42cc_a790_42b7_aaaf_46d6fd83658e.slice.
Jul 15 11:30:53.153164 kubelet[1925]: I0715 11:30:53.153098 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-cilium-run\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153164 kubelet[1925]: I0715 11:30:53.153140 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-cilium-cgroup\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153164 kubelet[1925]: I0715 11:30:53.153154 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-host-proc-sys-net\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153164 kubelet[1925]: I0715 11:30:53.153172 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-lib-modules\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153599 kubelet[1925]: I0715 11:30:53.153184 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-xtables-lock\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153599 kubelet[1925]: I0715 11:30:53.153242 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/758d42cc-a790-42b7-aaaf-46d6fd83658e-cilium-ipsec-secrets\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153599 kubelet[1925]: I0715 11:30:53.153310 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-cni-path\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153599 kubelet[1925]: I0715 11:30:53.153344 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-hostproc\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153599 kubelet[1925]: I0715 11:30:53.153359 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-etc-cni-netd\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153599 kubelet[1925]: I0715 11:30:53.153378 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-bpf-maps\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153761 kubelet[1925]: I0715 11:30:53.153397 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/758d42cc-a790-42b7-aaaf-46d6fd83658e-hubble-tls\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153761 kubelet[1925]: I0715 11:30:53.153432 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/758d42cc-a790-42b7-aaaf-46d6fd83658e-clustermesh-secrets\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153761 kubelet[1925]: I0715 11:30:53.153468 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/758d42cc-a790-42b7-aaaf-46d6fd83658e-cilium-config-path\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153761 kubelet[1925]: I0715 11:30:53.153487 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/758d42cc-a790-42b7-aaaf-46d6fd83658e-host-proc-sys-kernel\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.153761 kubelet[1925]: I0715 11:30:53.153507 1925 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2x7m\" (UniqueName: \"kubernetes.io/projected/758d42cc-a790-42b7-aaaf-46d6fd83658e-kube-api-access-f2x7m\") pod \"cilium-p5jcm\" (UID: \"758d42cc-a790-42b7-aaaf-46d6fd83658e\") " pod="kube-system/cilium-p5jcm"
Jul 15 11:30:53.411054 kubelet[1925]: E0715 11:30:53.410898 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:30:53.411649 env[1207]: time="2025-07-15T11:30:53.411477848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p5jcm,Uid:758d42cc-a790-42b7-aaaf-46d6fd83658e,Namespace:kube-system,Attempt:0,}"
Jul 15 11:30:53.423867 env[1207]: time="2025-07-15T11:30:53.423778208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 15 11:30:53.423867 env[1207]: time="2025-07-15T11:30:53.423832401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 15 11:30:53.423867 env[1207]: time="2025-07-15T11:30:53.423845857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 15 11:30:53.424181 env[1207]: time="2025-07-15T11:30:53.424128709Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a pid=3780 runtime=io.containerd.runc.v2
Jul 15 11:30:53.434096 systemd[1]: Started cri-containerd-0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a.scope.
Jul 15 11:30:53.456755 env[1207]: time="2025-07-15T11:30:53.456711405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p5jcm,Uid:758d42cc-a790-42b7-aaaf-46d6fd83658e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\"" Jul 15 11:30:53.458144 kubelet[1925]: E0715 11:30:53.457741 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:53.719177 env[1207]: time="2025-07-15T11:30:53.719128325Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:30:53.733797 env[1207]: time="2025-07-15T11:30:53.733732064Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b575c3d72f523246dc145d0433b35b9f9b7cb3b126c06771844adf45a02af13a\"" Jul 15 11:30:53.734362 env[1207]: time="2025-07-15T11:30:53.734324938Z" level=info msg="StartContainer for \"b575c3d72f523246dc145d0433b35b9f9b7cb3b126c06771844adf45a02af13a\"" Jul 15 11:30:53.748030 systemd[1]: Started cri-containerd-b575c3d72f523246dc145d0433b35b9f9b7cb3b126c06771844adf45a02af13a.scope. Jul 15 11:30:53.772667 env[1207]: time="2025-07-15T11:30:53.771449516Z" level=info msg="StartContainer for \"b575c3d72f523246dc145d0433b35b9f9b7cb3b126c06771844adf45a02af13a\" returns successfully" Jul 15 11:30:53.776656 systemd[1]: cri-containerd-b575c3d72f523246dc145d0433b35b9f9b7cb3b126c06771844adf45a02af13a.scope: Deactivated successfully. 
Jul 15 11:30:53.803660 env[1207]: time="2025-07-15T11:30:53.803590767Z" level=info msg="shim disconnected" id=b575c3d72f523246dc145d0433b35b9f9b7cb3b126c06771844adf45a02af13a Jul 15 11:30:53.803660 env[1207]: time="2025-07-15T11:30:53.803648588Z" level=warning msg="cleaning up after shim disconnected" id=b575c3d72f523246dc145d0433b35b9f9b7cb3b126c06771844adf45a02af13a namespace=k8s.io Jul 15 11:30:53.803660 env[1207]: time="2025-07-15T11:30:53.803660080Z" level=info msg="cleaning up dead shim" Jul 15 11:30:53.810814 env[1207]: time="2025-07-15T11:30:53.810758044Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3864 runtime=io.containerd.runc.v2\n" Jul 15 11:30:53.912904 kubelet[1925]: I0715 11:30:53.912871 1925 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9" path="/var/lib/kubelet/pods/b63b2ace-fc97-42ff-a2ca-31ec7b9f42a9/volumes" Jul 15 11:30:54.065479 kubelet[1925]: E0715 11:30:54.065385 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:54.069718 env[1207]: time="2025-07-15T11:30:54.069681870Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 11:30:54.081728 env[1207]: time="2025-07-15T11:30:54.081676212Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"182b7b23b429a002e8a1bbe4ff01dc634bb6527888a0d764e151f9f3d6c54f31\"" Jul 15 11:30:54.082163 env[1207]: time="2025-07-15T11:30:54.082139719Z" level=info msg="StartContainer for 
\"182b7b23b429a002e8a1bbe4ff01dc634bb6527888a0d764e151f9f3d6c54f31\"" Jul 15 11:30:54.095039 systemd[1]: Started cri-containerd-182b7b23b429a002e8a1bbe4ff01dc634bb6527888a0d764e151f9f3d6c54f31.scope. Jul 15 11:30:54.121412 env[1207]: time="2025-07-15T11:30:54.121361372Z" level=info msg="StartContainer for \"182b7b23b429a002e8a1bbe4ff01dc634bb6527888a0d764e151f9f3d6c54f31\" returns successfully" Jul 15 11:30:54.126185 systemd[1]: cri-containerd-182b7b23b429a002e8a1bbe4ff01dc634bb6527888a0d764e151f9f3d6c54f31.scope: Deactivated successfully. Jul 15 11:30:54.384185 env[1207]: time="2025-07-15T11:30:54.384065268Z" level=info msg="shim disconnected" id=182b7b23b429a002e8a1bbe4ff01dc634bb6527888a0d764e151f9f3d6c54f31 Jul 15 11:30:54.384185 env[1207]: time="2025-07-15T11:30:54.384119802Z" level=warning msg="cleaning up after shim disconnected" id=182b7b23b429a002e8a1bbe4ff01dc634bb6527888a0d764e151f9f3d6c54f31 namespace=k8s.io Jul 15 11:30:54.384185 env[1207]: time="2025-07-15T11:30:54.384134760Z" level=info msg="cleaning up dead shim" Jul 15 11:30:54.390781 env[1207]: time="2025-07-15T11:30:54.390729625Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3925 runtime=io.containerd.runc.v2\n" Jul 15 11:30:55.068497 kubelet[1925]: E0715 11:30:55.068468 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:55.072892 env[1207]: time="2025-07-15T11:30:55.072839904Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 11:30:55.088762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123754565.mount: Deactivated successfully. 
Jul 15 11:30:55.095679 env[1207]: time="2025-07-15T11:30:55.095633211Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3eb0124cf82c94accb44d3a9cb32e0725a82cd57905618e14506d0d4bad4b65\"" Jul 15 11:30:55.096201 env[1207]: time="2025-07-15T11:30:55.096146642Z" level=info msg="StartContainer for \"a3eb0124cf82c94accb44d3a9cb32e0725a82cd57905618e14506d0d4bad4b65\"" Jul 15 11:30:55.113351 systemd[1]: Started cri-containerd-a3eb0124cf82c94accb44d3a9cb32e0725a82cd57905618e14506d0d4bad4b65.scope. Jul 15 11:30:55.143317 env[1207]: time="2025-07-15T11:30:55.141677489Z" level=info msg="StartContainer for \"a3eb0124cf82c94accb44d3a9cb32e0725a82cd57905618e14506d0d4bad4b65\" returns successfully" Jul 15 11:30:55.144037 systemd[1]: cri-containerd-a3eb0124cf82c94accb44d3a9cb32e0725a82cd57905618e14506d0d4bad4b65.scope: Deactivated successfully. Jul 15 11:30:55.165727 env[1207]: time="2025-07-15T11:30:55.165664508Z" level=info msg="shim disconnected" id=a3eb0124cf82c94accb44d3a9cb32e0725a82cd57905618e14506d0d4bad4b65 Jul 15 11:30:55.165727 env[1207]: time="2025-07-15T11:30:55.165721296Z" level=warning msg="cleaning up after shim disconnected" id=a3eb0124cf82c94accb44d3a9cb32e0725a82cd57905618e14506d0d4bad4b65 namespace=k8s.io Jul 15 11:30:55.165727 env[1207]: time="2025-07-15T11:30:55.165735394Z" level=info msg="cleaning up dead shim" Jul 15 11:30:55.171845 env[1207]: time="2025-07-15T11:30:55.171795460Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3981 runtime=io.containerd.runc.v2\n" Jul 15 11:30:55.258831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3eb0124cf82c94accb44d3a9cb32e0725a82cd57905618e14506d0d4bad4b65-rootfs.mount: Deactivated successfully. 
Jul 15 11:30:55.938537 kubelet[1925]: E0715 11:30:55.938492 1925 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 11:30:56.071152 kubelet[1925]: E0715 11:30:56.071121 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:56.075457 env[1207]: time="2025-07-15T11:30:56.075410733Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 11:30:56.085590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159966569.mount: Deactivated successfully. Jul 15 11:30:56.088182 env[1207]: time="2025-07-15T11:30:56.088129516Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12c6c8fe59acb0f228240c4a3cd2546d9efe58afe6be193608b68d939db592c0\"" Jul 15 11:30:56.088630 env[1207]: time="2025-07-15T11:30:56.088605646Z" level=info msg="StartContainer for \"12c6c8fe59acb0f228240c4a3cd2546d9efe58afe6be193608b68d939db592c0\"" Jul 15 11:30:56.104042 systemd[1]: Started cri-containerd-12c6c8fe59acb0f228240c4a3cd2546d9efe58afe6be193608b68d939db592c0.scope. Jul 15 11:30:56.120432 systemd[1]: cri-containerd-12c6c8fe59acb0f228240c4a3cd2546d9efe58afe6be193608b68d939db592c0.scope: Deactivated successfully. 
Jul 15 11:30:56.121626 env[1207]: time="2025-07-15T11:30:56.121580607Z" level=info msg="StartContainer for \"12c6c8fe59acb0f228240c4a3cd2546d9efe58afe6be193608b68d939db592c0\" returns successfully" Jul 15 11:30:56.142569 env[1207]: time="2025-07-15T11:30:56.142507004Z" level=info msg="shim disconnected" id=12c6c8fe59acb0f228240c4a3cd2546d9efe58afe6be193608b68d939db592c0 Jul 15 11:30:56.142569 env[1207]: time="2025-07-15T11:30:56.142552632Z" level=warning msg="cleaning up after shim disconnected" id=12c6c8fe59acb0f228240c4a3cd2546d9efe58afe6be193608b68d939db592c0 namespace=k8s.io Jul 15 11:30:56.142569 env[1207]: time="2025-07-15T11:30:56.142560467Z" level=info msg="cleaning up dead shim" Jul 15 11:30:56.149052 env[1207]: time="2025-07-15T11:30:56.149005542Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:30:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4038 runtime=io.containerd.runc.v2\n" Jul 15 11:30:56.258637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12c6c8fe59acb0f228240c4a3cd2546d9efe58afe6be193608b68d939db592c0-rootfs.mount: Deactivated successfully. Jul 15 11:30:57.074522 kubelet[1925]: E0715 11:30:57.074492 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:57.078956 env[1207]: time="2025-07-15T11:30:57.078909855Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 11:30:57.093691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount322988981.mount: Deactivated successfully. 
Jul 15 11:30:57.098152 env[1207]: time="2025-07-15T11:30:57.098096972Z" level=info msg="CreateContainer within sandbox \"0b18ee4f73aa14fce81ee7f4d7aca5121efee0340d6b888fb32fa3aa1ad2d34a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4027442c72cce182c0b7277ccfedf636793b761524caf05cfe33e55434655c19\"" Jul 15 11:30:57.098609 env[1207]: time="2025-07-15T11:30:57.098574073Z" level=info msg="StartContainer for \"4027442c72cce182c0b7277ccfedf636793b761524caf05cfe33e55434655c19\"" Jul 15 11:30:57.111501 systemd[1]: Started cri-containerd-4027442c72cce182c0b7277ccfedf636793b761524caf05cfe33e55434655c19.scope. Jul 15 11:30:57.140965 env[1207]: time="2025-07-15T11:30:57.140890236Z" level=info msg="StartContainer for \"4027442c72cce182c0b7277ccfedf636793b761524caf05cfe33e55434655c19\" returns successfully" Jul 15 11:30:57.394280 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 15 11:30:58.080665 kubelet[1925]: E0715 11:30:58.080640 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:58.087243 kubelet[1925]: I0715 11:30:58.087206 1925 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T11:30:58Z","lastTransitionTime":"2025-07-15T11:30:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 15 11:30:58.093811 kubelet[1925]: I0715 11:30:58.093747 1925 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p5jcm" podStartSLOduration=5.093731095 podStartE2EDuration="5.093731095s" podCreationTimestamp="2025-07-15 11:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-07-15 11:30:58.093133515 +0000 UTC m=+82.251542098" watchObservedRunningTime="2025-07-15 11:30:58.093731095 +0000 UTC m=+82.252139678" Jul 15 11:30:59.412031 kubelet[1925]: E0715 11:30:59.411975 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:30:59.867353 systemd-networkd[1027]: lxc_health: Link UP Jul 15 11:30:59.879278 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 15 11:30:59.879686 systemd-networkd[1027]: lxc_health: Gained carrier Jul 15 11:31:01.413078 kubelet[1925]: E0715 11:31:01.413046 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:31:01.427501 systemd-networkd[1027]: lxc_health: Gained IPv6LL Jul 15 11:31:02.086798 kubelet[1925]: E0715 11:31:02.086766 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:31:03.088647 kubelet[1925]: E0715 11:31:03.088621 1925 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:31:03.569975 systemd[1]: run-containerd-runc-k8s.io-4027442c72cce182c0b7277ccfedf636793b761524caf05cfe33e55434655c19-runc.ZtwEoK.mount: Deactivated successfully. Jul 15 11:31:05.655229 systemd[1]: run-containerd-runc-k8s.io-4027442c72cce182c0b7277ccfedf636793b761524caf05cfe33e55434655c19-runc.ATlRdP.mount: Deactivated successfully. Jul 15 11:31:05.695926 sshd[3748]: pam_unix(sshd:session): session closed for user core Jul 15 11:31:05.698229 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:44310.service: Deactivated successfully. 
Jul 15 11:31:05.698884 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 11:31:05.699405 systemd-logind[1191]: Session 25 logged out. Waiting for processes to exit. Jul 15 11:31:05.699983 systemd-logind[1191]: Removed session 25.