Jul 10 00:34:44.034989 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed Jul 9 23:09:45 -00 2025
Jul 10 00:34:44.035010 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d
Jul 10 00:34:44.035020 kernel: BIOS-provided physical RAM map:
Jul 10 00:34:44.035026 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 10 00:34:44.035031 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 10 00:34:44.035037 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 10 00:34:44.035043 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 10 00:34:44.035049 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 10 00:34:44.035055 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 10 00:34:44.035061 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 10 00:34:44.035067 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 10 00:34:44.035073 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Jul 10 00:34:44.035078 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 10 00:34:44.035084 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 10 00:34:44.035091 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 10 00:34:44.035098 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 10 00:34:44.035104 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 10 00:34:44.035110 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 10 00:34:44.035119 kernel: NX (Execute Disable) protection: active
Jul 10 00:34:44.035125 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Jul 10 00:34:44.035131 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Jul 10 00:34:44.035137 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Jul 10 00:34:44.035143 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Jul 10 00:34:44.035149 kernel: extended physical RAM map:
Jul 10 00:34:44.035155 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 10 00:34:44.035162 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 10 00:34:44.035168 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 10 00:34:44.035174 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 10 00:34:44.035180 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 10 00:34:44.035186 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 10 00:34:44.035192 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 10 00:34:44.035198 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Jul 10 00:34:44.035204 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Jul 10 00:34:44.035210 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Jul 10 00:34:44.035215 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Jul 10 00:34:44.035221 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Jul 10 00:34:44.035228 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Jul 10 00:34:44.035234 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 10 00:34:44.035240 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 10 00:34:44.035246 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 10 00:34:44.035255 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 10 00:34:44.035262 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 10 00:34:44.035268 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 10 00:34:44.035275 kernel: efi: EFI v2.70 by EDK II
Jul 10 00:34:44.035282 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Jul 10 00:34:44.035288 kernel: random: crng init done
Jul 10 00:34:44.035295 kernel: SMBIOS 2.8 present.
Jul 10 00:34:44.035301 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jul 10 00:34:44.035308 kernel: Hypervisor detected: KVM
Jul 10 00:34:44.035314 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 10 00:34:44.035320 kernel: kvm-clock: cpu 0, msr 5119a001, primary cpu clock
Jul 10 00:34:44.035327 kernel: kvm-clock: using sched offset of 4933478805 cycles
Jul 10 00:34:44.035338 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 10 00:34:44.035345 kernel: tsc: Detected 2794.746 MHz processor
Jul 10 00:34:44.035351 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:34:44.035358 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:34:44.035365 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 10 00:34:44.035383 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:34:44.035390 kernel: Using GB pages for direct mapping
Jul 10 00:34:44.035396 kernel: Secure boot disabled
Jul 10 00:34:44.035403 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:34:44.035411 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 10 00:34:44.035417 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:34:44.035424 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:44.035431 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:44.035440 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 10 00:34:44.035446 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:44.035453 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:44.035462 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:44.035469 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:44.035477 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 10 00:34:44.035484 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 10 00:34:44.035490 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 10 00:34:44.035497 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 10 00:34:44.035504 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 10 00:34:44.035510 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 10 00:34:44.035517 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 10 00:34:44.035523 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 10 00:34:44.035530 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 10 00:34:44.035537 kernel: No NUMA configuration found
Jul 10 00:34:44.035544 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 10 00:34:44.035551 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 10 00:34:44.035557 kernel: Zone ranges:
Jul 10 00:34:44.035564 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:34:44.035571 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 10 00:34:44.035577 kernel: Normal empty
Jul 10 00:34:44.035584 kernel: Movable zone start for each node
Jul 10 00:34:44.035590 kernel: Early memory node ranges
Jul 10 00:34:44.035598 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 10 00:34:44.035605 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 10 00:34:44.035611 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 10 00:34:44.035618 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 10 00:34:44.035624 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 10 00:34:44.035630 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 10 00:34:44.035637 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 10 00:34:44.035643 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:34:44.035650 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 10 00:34:44.035656 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 10 00:34:44.035664 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:34:44.035671 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 10 00:34:44.035678 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 10 00:34:44.035684 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 10 00:34:44.035691 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 10 00:34:44.035697 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 10 00:34:44.035704 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:34:44.035710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 10 00:34:44.035717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 10 00:34:44.035725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:34:44.035731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 10 00:34:44.035738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 10 00:34:44.035747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:34:44.035756 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 10 00:34:44.035763 kernel: TSC deadline timer available
Jul 10 00:34:44.035769 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 10 00:34:44.035776 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 10 00:34:44.035782 kernel: kvm-guest: setup PV sched yield
Jul 10 00:34:44.035790 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 10 00:34:44.035797 kernel: Booting paravirtualized kernel on KVM
Jul 10 00:34:44.035809 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:34:44.035817 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 10 00:34:44.035824 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 10 00:34:44.035831 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 10 00:34:44.035837 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 10 00:34:44.035844 kernel: kvm-guest: setup async PF for cpu 0
Jul 10 00:34:44.035851 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Jul 10 00:34:44.035858 kernel: kvm-guest: PV spinlocks enabled
Jul 10 00:34:44.035865 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 10 00:34:44.035873 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 10 00:34:44.035880 kernel: Policy zone: DMA32
Jul 10 00:34:44.035888 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d
Jul 10 00:34:44.035895 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:34:44.035902 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:34:44.035911 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:34:44.035918 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:34:44.035925 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2275K rwdata, 13724K rodata, 47472K init, 4108K bss, 169308K reserved, 0K cma-reserved)
Jul 10 00:34:44.035932 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:34:44.035939 kernel: ftrace: allocating 34602 entries in 136 pages
Jul 10 00:34:44.035952 kernel: ftrace: allocated 136 pages with 2 groups
Jul 10 00:34:44.035960 kernel: rcu: Hierarchical RCU implementation.
Jul 10 00:34:44.035967 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:34:44.035975 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:34:44.035982 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:34:44.035990 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:34:44.035998 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:34:44.036005 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:34:44.036011 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 10 00:34:44.036018 kernel: Console: colour dummy device 80x25
Jul 10 00:34:44.036025 kernel: printk: console [ttyS0] enabled
Jul 10 00:34:44.036032 kernel: ACPI: Core revision 20210730
Jul 10 00:34:44.036039 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 10 00:34:44.036047 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:34:44.036054 kernel: x2apic enabled
Jul 10 00:34:44.036061 kernel: Switched APIC routing to physical x2apic.
Jul 10 00:34:44.036068 kernel: kvm-guest: setup PV IPIs
Jul 10 00:34:44.036075 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 10 00:34:44.036082 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 10 00:34:44.036089 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 10 00:34:44.036095 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 10 00:34:44.036105 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 10 00:34:44.036114 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 10 00:34:44.036121 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:34:44.036127 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:34:44.036134 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:34:44.036141 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 10 00:34:44.036148 kernel: RETBleed: Mitigation: untrained return thunk
Jul 10 00:34:44.036155 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 10 00:34:44.036165 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 10 00:34:44.036173 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:34:44.036180 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:34:44.036187 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:34:44.036194 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:34:44.036201 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 10 00:34:44.036208 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:34:44.036215 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:34:44.036221 kernel: LSM: Security Framework initializing
Jul 10 00:34:44.036228 kernel: SELinux: Initializing.
Jul 10 00:34:44.036236 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:34:44.036243 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:34:44.036250 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 10 00:34:44.036257 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 10 00:34:44.036264 kernel: ... version: 0
Jul 10 00:34:44.036271 kernel: ... bit width: 48
Jul 10 00:34:44.036278 kernel: ... generic registers: 6
Jul 10 00:34:44.036285 kernel: ... value mask: 0000ffffffffffff
Jul 10 00:34:44.036292 kernel: ... max period: 00007fffffffffff
Jul 10 00:34:44.036300 kernel: ... fixed-purpose events: 0
Jul 10 00:34:44.036307 kernel: ... event mask: 000000000000003f
Jul 10 00:34:44.036313 kernel: signal: max sigframe size: 1776
Jul 10 00:34:44.036320 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:34:44.036327 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:34:44.036334 kernel: x86: Booting SMP configuration:
Jul 10 00:34:44.036341 kernel: .... node #0, CPUs: #1
Jul 10 00:34:44.036348 kernel: kvm-clock: cpu 1, msr 5119a041, secondary cpu clock
Jul 10 00:34:44.036354 kernel: kvm-guest: setup async PF for cpu 1
Jul 10 00:34:44.036361 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Jul 10 00:34:44.036380 kernel: #2
Jul 10 00:34:44.036389 kernel: kvm-clock: cpu 2, msr 5119a081, secondary cpu clock
Jul 10 00:34:44.036397 kernel: kvm-guest: setup async PF for cpu 2
Jul 10 00:34:44.036406 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Jul 10 00:34:44.036414 kernel: #3
Jul 10 00:34:44.036427 kernel: kvm-clock: cpu 3, msr 5119a0c1, secondary cpu clock
Jul 10 00:34:44.036435 kernel: kvm-guest: setup async PF for cpu 3
Jul 10 00:34:44.036443 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Jul 10 00:34:44.036451 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:34:44.036465 kernel: smpboot: Max logical packages: 1
Jul 10 00:34:44.036474 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 10 00:34:44.036482 kernel: devtmpfs: initialized
Jul 10 00:34:44.036491 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:34:44.036500 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 10 00:34:44.036509 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 10 00:34:44.036518 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 10 00:34:44.036527 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 10 00:34:44.036536 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 10 00:34:44.036547 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:34:44.036556 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:34:44.036565 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:34:44.036574 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:34:44.036583 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:34:44.036591 kernel: audit: type=2000 audit(1752107683.311:1): state=initialized audit_enabled=0 res=1
Jul 10 00:34:44.036598 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:34:44.036605 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:34:44.036612 kernel: cpuidle: using governor menu
Jul 10 00:34:44.036621 kernel: ACPI: bus type PCI registered
Jul 10 00:34:44.036628 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:34:44.036634 kernel: dca service started, version 1.12.1
Jul 10 00:34:44.036642 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 10 00:34:44.036649 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 10 00:34:44.036656 kernel: PCI: Using configuration type 1 for base access
Jul 10 00:34:44.036663 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:34:44.036670 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:34:44.036677 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:34:44.036685 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:34:44.036692 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:34:44.036699 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:34:44.036706 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 10 00:34:44.036713 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 10 00:34:44.036720 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 10 00:34:44.036727 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:34:44.036734 kernel: ACPI: Interpreter enabled
Jul 10 00:34:44.036740 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 10 00:34:44.036748 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:34:44.036755 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:34:44.036762 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 10 00:34:44.036770 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:34:44.036938 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:34:44.037030 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 10 00:34:44.037104 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 10 00:34:44.037115 kernel: PCI host bridge to bus 0000:00
Jul 10 00:34:44.037199 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 10 00:34:44.037276 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 10 00:34:44.037344 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 10 00:34:44.037423 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 10 00:34:44.037489 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 10 00:34:44.037553 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jul 10 00:34:44.037621 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:34:44.037719 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 10 00:34:44.037831 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 10 00:34:44.037908 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 10 00:34:44.037991 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 10 00:34:44.038064 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 10 00:34:44.038138 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 10 00:34:44.038215 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 10 00:34:44.038310 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 10 00:34:44.038410 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 10 00:34:44.038488 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 10 00:34:44.038562 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 10 00:34:44.038668 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 10 00:34:44.038748 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 10 00:34:44.038822 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 10 00:34:44.038896 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 10 00:34:44.038995 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 10 00:34:44.039071 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 10 00:34:44.039144 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 10 00:34:44.039215 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 10 00:34:44.039300 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 10 00:34:44.039459 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 10 00:34:44.039536 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 10 00:34:44.039641 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 10 00:34:44.039716 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 10 00:34:44.039787 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 10 00:34:44.039878 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 10 00:34:44.039965 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 10 00:34:44.039975 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 10 00:34:44.039983 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 10 00:34:44.039990 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 10 00:34:44.039997 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 10 00:34:44.040004 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 10 00:34:44.040011 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 10 00:34:44.040018 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 10 00:34:44.040027 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 10 00:34:44.040034 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 10 00:34:44.040041 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 10 00:34:44.040048 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 10 00:34:44.040055 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 10 00:34:44.040062 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 10 00:34:44.040069 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 10 00:34:44.040076 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 10 00:34:44.040083 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 10 00:34:44.040091 kernel: iommu: Default domain type: Translated
Jul 10 00:34:44.040098 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:34:44.040188 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 10 00:34:44.040288 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 10 00:34:44.040402 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 10 00:34:44.040413 kernel: vgaarb: loaded
Jul 10 00:34:44.040421 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 10 00:34:44.040428 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 10 00:34:44.040435 kernel: PTP clock support registered
Jul 10 00:34:44.040458 kernel: Registered efivars operations
Jul 10 00:34:44.040465 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:34:44.040472 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 10 00:34:44.040479 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 10 00:34:44.040486 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 10 00:34:44.040493 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Jul 10 00:34:44.040512 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Jul 10 00:34:44.040519 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 10 00:34:44.040526 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 10 00:34:44.040535 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 10 00:34:44.040543 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 10 00:34:44.040550 kernel: clocksource: Switched to clocksource kvm-clock
Jul 10 00:34:44.040568 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:34:44.040577 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:34:44.040584 kernel: pnp: PnP ACPI init
Jul 10 00:34:44.040719 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 10 00:34:44.040732 kernel: pnp: PnP ACPI: found 6 devices
Jul 10 00:34:44.040742 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:34:44.040749 kernel: NET: Registered PF_INET protocol family
Jul 10 00:34:44.040756 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:34:44.040763 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:34:44.040770 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:34:44.040777 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:34:44.040785 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 10 00:34:44.040792 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:34:44.040799 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:34:44.040807 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:34:44.040814 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:34:44.040823 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:34:44.040930 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 10 00:34:44.041035 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 10 00:34:44.041106 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 10 00:34:44.041192 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 10 00:34:44.041278 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 10 00:34:44.041407 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 10 00:34:44.041499 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 10 00:34:44.041584 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jul 10 00:34:44.041595 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:34:44.041602 kernel: Initialise system trusted keyrings
Jul 10 00:34:44.041609 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:34:44.041616 kernel: Key type asymmetric registered
Jul 10 00:34:44.041623 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:34:44.041633 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 10 00:34:44.041640 kernel: io scheduler mq-deadline registered
Jul 10 00:34:44.041647 kernel: io scheduler kyber registered
Jul 10 00:34:44.041665 kernel: io scheduler bfq registered
Jul 10 00:34:44.041674 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:34:44.041682 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 10 00:34:44.041691 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 10 00:34:44.041698 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 10 00:34:44.041705 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:34:44.041714 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:34:44.041722 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 10 00:34:44.041729 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 10 00:34:44.041736 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 10 00:34:44.041837 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 10 00:34:44.041850 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 10 00:34:44.041943 kernel: rtc_cmos 00:04: registered as rtc0
Jul 10 00:34:44.042069 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T00:34:43 UTC (1752107683)
Jul 10 00:34:44.042182 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 10 00:34:44.042194 kernel: efifb: probing for efifb
Jul 10 00:34:44.042201 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 10 00:34:44.042209 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 10 00:34:44.042216 kernel: efifb: scrolling: redraw
Jul 10 00:34:44.042223 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 10 00:34:44.042231 kernel: Console: switching to colour frame buffer device 160x50
Jul 10 00:34:44.042238 kernel: fb0: EFI VGA frame buffer device
Jul 10 00:34:44.042261 kernel: pstore: Registered efi as persistent store backend
Jul 10 00:34:44.042275 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:34:44.042284 kernel: Segment Routing with IPv6
Jul 10 00:34:44.042294 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:34:44.042304 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:34:44.042313 kernel: Key type dns_resolver registered
Jul 10 00:34:44.042320 kernel: IPI shorthand broadcast: enabled
Jul 10 00:34:44.042342 kernel: sched_clock: Marking stable (481183474, 124853388)->(678959276, -72922414)
Jul 10 00:34:44.042350 kernel: registered taskstats version 1
Jul 10 00:34:44.042358 kernel: Loading compiled-in X.509 certificates
Jul 10 00:34:44.042365 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: 6ebecdd7757c0df63fc51731f0b99957f4e4af16'
Jul 10 00:34:44.042385 kernel: Key type .fscrypt registered
Jul 10 00:34:44.042392 kernel: Key type fscrypt-provisioning registered
Jul 10 00:34:44.042412 kernel: pstore: Using crash dump compression: deflate
Jul 10 00:34:44.042420 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:34:44.042428 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:34:44.042439 kernel: ima: No architecture policies found
Jul 10 00:34:44.042446 kernel: clk: Disabling unused clocks
Jul 10 00:34:44.042454 kernel: Freeing unused kernel image (initmem) memory: 47472K
Jul 10 00:34:44.042461 kernel: Write protecting the kernel read-only data: 28672k
Jul 10 00:34:44.042481 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 10 00:34:44.042489 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Jul 10 00:34:44.042496 kernel: Run /init as init process
Jul 10 00:34:44.042503 kernel: with arguments:
Jul 10 00:34:44.042510 kernel: /init
Jul 10 00:34:44.042520 kernel: with environment:
Jul 10 00:34:44.042539 kernel: HOME=/
Jul 10 00:34:44.042546 kernel: TERM=linux
Jul 10 00:34:44.042553 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:34:44.042563 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 10 00:34:44.042573 systemd[1]: Detected virtualization kvm.
Jul 10 00:34:44.042581 systemd[1]: Detected architecture x86-64.
Jul 10 00:34:44.042600 systemd[1]: Running in initrd.
Jul 10 00:34:44.042611 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:34:44.042618 systemd[1]: Hostname set to .
Jul 10 00:34:44.042627 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:34:44.042634 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:34:44.042654 systemd[1]: Started systemd-ask-password-console.path.
Jul 10 00:34:44.042663 systemd[1]: Reached target cryptsetup.target.
Jul 10 00:34:44.042671 systemd[1]: Reached target paths.target.
Jul 10 00:34:44.042678 systemd[1]: Reached target slices.target.
Jul 10 00:34:44.042688 systemd[1]: Reached target swap.target. Jul 10 00:34:44.042707 systemd[1]: Reached target timers.target. Jul 10 00:34:44.042717 systemd[1]: Listening on iscsid.socket. Jul 10 00:34:44.042724 systemd[1]: Listening on iscsiuio.socket. Jul 10 00:34:44.042732 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 00:34:44.042740 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:34:44.042748 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:34:44.042759 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:34:44.042767 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:34:44.042775 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:34:44.042782 systemd[1]: Reached target sockets.target. Jul 10 00:34:44.042790 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:34:44.042798 systemd[1]: Finished network-cleanup.service. Jul 10 00:34:44.042805 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:34:44.042813 systemd[1]: Starting systemd-journald.service... Jul 10 00:34:44.042821 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:34:44.042830 systemd[1]: Starting systemd-resolved.service... Jul 10 00:34:44.042840 systemd[1]: Starting systemd-vconsole-setup.service... Jul 10 00:34:44.042848 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:34:44.042856 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:34:44.042863 kernel: audit: type=1130 audit(1752107684.027:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.042872 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:34:44.042879 systemd[1]: Finished systemd-vconsole-setup.service. 
Jul 10 00:34:44.042887 kernel: audit: type=1130 audit(1752107684.035:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.042897 systemd[1]: Starting dracut-cmdline-ask.service... Jul 10 00:34:44.042911 systemd-journald[198]: Journal started Jul 10 00:34:44.043055 systemd-journald[198]: Runtime Journal (/run/log/journal/0b24c5252eb24828b2ec83f489c3559e) is 6.0M, max 48.4M, 42.4M free. Jul 10 00:34:44.043096 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:34:44.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.046400 systemd[1]: Started systemd-journald.service. Jul 10 00:34:44.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.050347 systemd-modules-load[199]: Inserted module 'overlay' Jul 10 00:34:44.050775 kernel: audit: type=1130 audit(1752107684.044:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:44.052442 systemd-resolved[200]: Positive Trust Anchors: Jul 10 00:34:44.054563 kernel: audit: type=1130 audit(1752107684.045:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.052454 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:34:44.052496 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:34:44.055814 systemd-resolved[200]: Defaulting to hostname 'linux'. Jul 10 00:34:44.057005 systemd[1]: Started systemd-resolved.service. Jul 10 00:34:44.057636 systemd[1]: Reached target nss-lookup.target. Jul 10 00:34:44.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.061397 kernel: audit: type=1130 audit(1752107684.057:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.062538 systemd[1]: Finished dracut-cmdline-ask.service. Jul 10 00:34:44.066748 kernel: audit: type=1130 audit(1752107684.062:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:44.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.065904 systemd[1]: Starting dracut-cmdline.service... Jul 10 00:34:44.074732 dracut-cmdline[215]: dracut-dracut-053 Jul 10 00:34:44.076687 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d Jul 10 00:34:44.103414 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:34:44.107917 systemd-modules-load[199]: Inserted module 'br_netfilter' Jul 10 00:34:44.109009 kernel: Bridge firewalling registered Jul 10 00:34:44.126405 kernel: SCSI subsystem initialized Jul 10 00:34:44.138534 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:34:44.138595 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:34:44.138610 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 10 00:34:44.142027 systemd-modules-load[199]: Inserted module 'dm_multipath' Jul 10 00:34:44.144390 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:34:44.144686 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:34:44.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:44.151504 kernel: audit: type=1130 audit(1752107684.146:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.147567 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:34:44.158772 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:34:44.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.165415 kernel: audit: type=1130 audit(1752107684.160:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.165459 kernel: iscsi: registered transport (tcp) Jul 10 00:34:44.193408 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:34:44.193484 kernel: QLogic iSCSI HBA Driver Jul 10 00:34:44.237773 systemd[1]: Finished dracut-cmdline.service. Jul 10 00:34:44.243874 kernel: audit: type=1130 audit(1752107684.238:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.239876 systemd[1]: Starting dracut-pre-udev.service... 
Jul 10 00:34:44.286403 kernel: raid6: avx2x4 gen() 30145 MB/s Jul 10 00:34:44.303400 kernel: raid6: avx2x4 xor() 8026 MB/s Jul 10 00:34:44.320416 kernel: raid6: avx2x2 gen() 32181 MB/s Jul 10 00:34:44.337427 kernel: raid6: avx2x2 xor() 19078 MB/s Jul 10 00:34:44.354407 kernel: raid6: avx2x1 gen() 26162 MB/s Jul 10 00:34:44.371398 kernel: raid6: avx2x1 xor() 15182 MB/s Jul 10 00:34:44.388394 kernel: raid6: sse2x4 gen() 14446 MB/s Jul 10 00:34:44.405393 kernel: raid6: sse2x4 xor() 7589 MB/s Jul 10 00:34:44.422394 kernel: raid6: sse2x2 gen() 15988 MB/s Jul 10 00:34:44.439402 kernel: raid6: sse2x2 xor() 9512 MB/s Jul 10 00:34:44.456398 kernel: raid6: sse2x1 gen() 11670 MB/s Jul 10 00:34:44.473916 kernel: raid6: sse2x1 xor() 7392 MB/s Jul 10 00:34:44.473957 kernel: raid6: using algorithm avx2x2 gen() 32181 MB/s Jul 10 00:34:44.473968 kernel: raid6: .... xor() 19078 MB/s, rmw enabled Jul 10 00:34:44.474732 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:34:44.488404 kernel: xor: automatically using best checksumming function avx Jul 10 00:34:44.582480 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 10 00:34:44.591146 systemd[1]: Finished dracut-pre-udev.service. Jul 10 00:34:44.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.592000 audit: BPF prog-id=7 op=LOAD Jul 10 00:34:44.592000 audit: BPF prog-id=8 op=LOAD Jul 10 00:34:44.593608 systemd[1]: Starting systemd-udevd.service... Jul 10 00:34:44.606573 systemd-udevd[399]: Using default interface naming scheme 'v252'. Jul 10 00:34:44.610636 systemd[1]: Started systemd-udevd.service. Jul 10 00:34:44.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:44.612189 systemd[1]: Starting dracut-pre-trigger.service... Jul 10 00:34:44.624107 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Jul 10 00:34:44.650271 systemd[1]: Finished dracut-pre-trigger.service. Jul 10 00:34:44.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.654264 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:34:44.693626 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:34:44.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:44.725180 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:34:44.730832 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:34:44.730847 kernel: GPT:9289727 != 19775487 Jul 10 00:34:44.730855 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:34:44.730865 kernel: GPT:9289727 != 19775487 Jul 10 00:34:44.730874 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:34:44.730883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:44.731789 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:34:44.739388 kernel: libata version 3.00 loaded. Jul 10 00:34:44.742686 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 10 00:34:44.742709 kernel: AES CTR mode by8 optimization enabled Jul 10 00:34:44.747621 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 00:34:44.760652 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 00:34:44.760672 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 10 00:34:44.760788 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 00:34:44.760881 kernel: scsi host0: ahci Jul 10 00:34:44.761024 kernel: scsi host1: ahci Jul 10 00:34:44.761124 kernel: scsi host2: ahci Jul 10 00:34:44.761248 kernel: scsi host3: ahci Jul 10 00:34:44.761411 kernel: scsi host4: ahci Jul 10 00:34:44.761568 kernel: scsi host5: ahci Jul 10 00:34:44.761670 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jul 10 00:34:44.761684 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jul 10 00:34:44.761693 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jul 10 00:34:44.761702 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jul 10 00:34:44.761711 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jul 10 00:34:44.761719 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jul 10 00:34:44.778860 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 10 00:34:44.783394 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (438) Jul 10 00:34:44.783622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 10 00:34:44.794383 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 10 00:34:44.800829 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 10 00:34:44.804936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:34:44.806611 systemd[1]: Starting disk-uuid.service... Jul 10 00:34:44.813505 disk-uuid[519]: Primary Header is updated. 
Jul 10 00:34:44.813505 disk-uuid[519]: Secondary Entries is updated. Jul 10 00:34:44.813505 disk-uuid[519]: Secondary Header is updated. Jul 10 00:34:44.818412 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:44.822407 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:45.072004 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:45.072087 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:45.072098 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:45.073402 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 00:34:45.074394 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:45.075408 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:45.076411 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 00:34:45.077805 kernel: ata3.00: applying bridge limits Jul 10 00:34:45.077824 kernel: ata3.00: configured for UDMA/100 Jul 10 00:34:45.078403 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 00:34:45.113414 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 00:34:45.131139 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:34:45.131152 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:34:45.832417 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:45.832553 disk-uuid[520]: The operation has completed successfully. Jul 10 00:34:45.865655 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:34:45.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:45.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:45.865773 systemd[1]: Finished disk-uuid.service. 
Jul 10 00:34:45.867915 systemd[1]: Starting verity-setup.service... Jul 10 00:34:45.881401 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 10 00:34:45.902353 systemd[1]: Found device dev-mapper-usr.device. Jul 10 00:34:45.904581 systemd[1]: Mounting sysusr-usr.mount... Jul 10 00:34:45.906699 systemd[1]: Finished verity-setup.service. Jul 10 00:34:45.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:45.992398 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 10 00:34:45.992731 systemd[1]: Mounted sysusr-usr.mount. Jul 10 00:34:45.993298 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 10 00:34:45.994258 systemd[1]: Starting ignition-setup.service... Jul 10 00:34:45.997074 systemd[1]: Starting parse-ip-for-networkd.service... Jul 10 00:34:46.007954 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:34:46.007996 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:34:46.008012 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:34:46.015709 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 10 00:34:46.058924 systemd[1]: Finished parse-ip-for-networkd.service. Jul 10 00:34:46.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.060000 audit: BPF prog-id=9 op=LOAD Jul 10 00:34:46.061465 systemd[1]: Starting systemd-networkd.service... 
Jul 10 00:34:46.090862 systemd-networkd[707]: lo: Link UP Jul 10 00:34:46.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.090870 systemd-networkd[707]: lo: Gained carrier Jul 10 00:34:46.091426 systemd-networkd[707]: Enumeration completed Jul 10 00:34:46.091504 systemd[1]: Started systemd-networkd.service. Jul 10 00:34:46.091657 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:34:46.095913 systemd-networkd[707]: eth0: Link UP Jul 10 00:34:46.095916 systemd-networkd[707]: eth0: Gained carrier Jul 10 00:34:46.096116 systemd[1]: Reached target network.target. Jul 10 00:34:46.097693 systemd[1]: Starting iscsiuio.service... Jul 10 00:34:46.111460 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:34:46.147394 systemd[1]: Started iscsiuio.service. Jul 10 00:34:46.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.148871 systemd[1]: Starting iscsid.service... Jul 10 00:34:46.152759 iscsid[712]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:34:46.152759 iscsid[712]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 10 00:34:46.152759 iscsid[712]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Jul 10 00:34:46.152759 iscsid[712]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 10 00:34:46.152759 iscsid[712]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:34:46.152759 iscsid[712]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 10 00:34:46.155943 systemd[1]: Started iscsid.service. Jul 10 00:34:46.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.165447 systemd[1]: Finished ignition-setup.service. Jul 10 00:34:46.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.171571 systemd[1]: Starting dracut-initqueue.service... Jul 10 00:34:46.173714 systemd[1]: Starting ignition-fetch-offline.service... Jul 10 00:34:46.181137 systemd[1]: Finished dracut-initqueue.service. Jul 10 00:34:46.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.181751 systemd[1]: Reached target remote-fs-pre.target. Jul 10 00:34:46.183403 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:34:46.184883 systemd[1]: Reached target remote-fs.target. Jul 10 00:34:46.187555 systemd[1]: Starting dracut-pre-mount.service... Jul 10 00:34:46.197051 systemd[1]: Finished dracut-pre-mount.service. Jul 10 00:34:46.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:46.294949 ignition[715]: Ignition 2.14.0 Jul 10 00:34:46.294965 ignition[715]: Stage: fetch-offline Jul 10 00:34:46.295062 ignition[715]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:46.295076 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:46.295260 ignition[715]: parsed url from cmdline: "" Jul 10 00:34:46.295264 ignition[715]: no config URL provided Jul 10 00:34:46.295271 ignition[715]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:34:46.295281 ignition[715]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:34:46.295305 ignition[715]: op(1): [started] loading QEMU firmware config module Jul 10 00:34:46.295327 ignition[715]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:34:46.306563 ignition[715]: op(1): [finished] loading QEMU firmware config module Jul 10 00:34:46.353014 ignition[715]: parsing config with SHA512: bf9f495f18f781617c378c2cfb0a3d359646010ef8d7501a71a3bf7f0216820ecd23d167c2194f835cb21999015af5a710fa016d567d902792b05dca46d4033f Jul 10 00:34:46.368782 unknown[715]: fetched base config from "system" Jul 10 00:34:46.368794 unknown[715]: fetched user config from "qemu" Jul 10 00:34:46.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.369338 ignition[715]: fetch-offline: fetch-offline passed Jul 10 00:34:46.370544 systemd[1]: Finished ignition-fetch-offline.service. Jul 10 00:34:46.369410 ignition[715]: Ignition finished successfully Jul 10 00:34:46.376172 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:34:46.377151 systemd[1]: Starting ignition-kargs.service... 
Jul 10 00:34:46.388981 ignition[735]: Ignition 2.14.0 Jul 10 00:34:46.388993 ignition[735]: Stage: kargs Jul 10 00:34:46.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.395061 systemd[1]: Finished ignition-kargs.service. Jul 10 00:34:46.389115 ignition[735]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:46.389126 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:46.397659 systemd[1]: Starting ignition-disks.service... Jul 10 00:34:46.390467 ignition[735]: kargs: kargs passed Jul 10 00:34:46.390513 ignition[735]: Ignition finished successfully Jul 10 00:34:46.414583 ignition[741]: Ignition 2.14.0 Jul 10 00:34:46.414596 ignition[741]: Stage: disks Jul 10 00:34:46.414745 ignition[741]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:46.414758 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:46.417533 systemd[1]: Finished ignition-disks.service. Jul 10 00:34:46.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.416520 ignition[741]: disks: disks passed Jul 10 00:34:46.419674 systemd[1]: Reached target initrd-root-device.target. Jul 10 00:34:46.416570 ignition[741]: Ignition finished successfully Jul 10 00:34:46.432242 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:34:46.433194 systemd[1]: Reached target local-fs.target. Jul 10 00:34:46.434780 systemd[1]: Reached target sysinit.target. Jul 10 00:34:46.435247 systemd[1]: Reached target basic.target. Jul 10 00:34:46.436753 systemd[1]: Starting systemd-fsck-root.service... 
Jul 10 00:34:46.485610 systemd-fsck[749]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 10 00:34:46.495872 systemd[1]: Finished systemd-fsck-root.service. Jul 10 00:34:46.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.499517 systemd[1]: Mounting sysroot.mount... Jul 10 00:34:46.510204 systemd[1]: Mounted sysroot.mount. Jul 10 00:34:46.512584 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 10 00:34:46.511076 systemd[1]: Reached target initrd-root-fs.target. Jul 10 00:34:46.513571 systemd[1]: Mounting sysroot-usr.mount... Jul 10 00:34:46.515441 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 10 00:34:46.515488 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:34:46.515512 systemd[1]: Reached target ignition-diskful.target. Jul 10 00:34:46.518810 systemd[1]: Mounted sysroot-usr.mount. Jul 10 00:34:46.521108 systemd[1]: Starting initrd-setup-root.service... Jul 10 00:34:46.526489 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:34:46.532195 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:34:46.537475 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:34:46.542107 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:34:46.579789 systemd[1]: Finished initrd-setup-root.service. Jul 10 00:34:46.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:46.581708 systemd[1]: Starting ignition-mount.service... Jul 10 00:34:46.583030 systemd[1]: Starting sysroot-boot.service... Jul 10 00:34:46.592079 bash[800]: umount: /sysroot/usr/share/oem: not mounted. Jul 10 00:34:46.613843 systemd[1]: Finished sysroot-boot.service. Jul 10 00:34:46.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.617413 ignition[802]: INFO : Ignition 2.14.0 Jul 10 00:34:46.617413 ignition[802]: INFO : Stage: mount Jul 10 00:34:46.619206 ignition[802]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:46.619206 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:46.622253 ignition[802]: INFO : mount: mount passed Jul 10 00:34:46.623082 ignition[802]: INFO : Ignition finished successfully Jul 10 00:34:46.624752 systemd[1]: Finished ignition-mount.service. Jul 10 00:34:46.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:46.915608 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 10 00:34:46.924413 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Jul 10 00:34:46.924485 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:34:46.926131 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:34:46.926146 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:34:46.931429 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 10 00:34:46.933134 systemd[1]: Starting ignition-files.service... 
Jul 10 00:34:47.003994 ignition[830]: INFO : Ignition 2.14.0 Jul 10 00:34:47.003994 ignition[830]: INFO : Stage: files Jul 10 00:34:47.006140 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:47.006140 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:47.008337 ignition[830]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:34:47.010103 ignition[830]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:34:47.010103 ignition[830]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:34:47.013625 ignition[830]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:34:47.015160 ignition[830]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:34:47.016763 ignition[830]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:34:47.016702 unknown[830]: wrote ssh authorized keys file for user: core Jul 10 00:34:47.019910 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 10 00:34:47.019910 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 10 00:34:47.096701 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 00:34:47.344616 systemd-networkd[707]: eth0: Gained IPv6LL Jul 10 00:34:47.448289 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 10 00:34:47.450678 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 00:34:47.450678 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 10 00:34:47.909790 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 00:34:48.049000 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 00:34:48.049000 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 10 00:34:48.053030 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 10 00:34:48.529638 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 10 00:34:49.000142 ignition[830]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 10 00:34:49.000142 ignition[830]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 10 00:34:49.003579 ignition[830]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:34:49.005480 ignition[830]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:34:49.005480 ignition[830]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 10 00:34:49.008447 ignition[830]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 10 00:34:49.008447 ignition[830]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:34:49.011459 ignition[830]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:34:49.011459 ignition[830]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 10 00:34:49.011459 ignition[830]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 10 00:34:49.015744 ignition[830]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 00:34:49.015744 ignition[830]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 00:34:49.015744 ignition[830]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:34:49.046799 ignition[830]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:34:49.048440 ignition[830]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 00:34:49.048440 ignition[830]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:34:49.048440 ignition[830]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:34:49.048440 ignition[830]: INFO : files: files passed Jul 10 00:34:49.048440 ignition[830]: INFO : Ignition finished successfully Jul 10 00:34:49.061272 kernel: kauditd_printk_skb: 24 callbacks suppressed Jul 10 00:34:49.061306 kernel: audit: type=1130 audit(1752107689.050:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.049762 systemd[1]: Finished ignition-files.service. 
Jul 10 00:34:49.052320 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 10 00:34:49.055857 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 10 00:34:49.071918 kernel: audit: type=1130 audit(1752107689.063:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.071934 kernel: audit: type=1131 audit(1752107689.063:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.059475 systemd[1]: Starting ignition-quench.service... Jul 10 00:34:49.063324 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:34:49.063463 systemd[1]: Finished ignition-quench.service. Jul 10 00:34:49.074639 initrd-setup-root-after-ignition[855]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 10 00:34:49.076345 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:34:49.077067 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 10 00:34:49.082851 kernel: audit: type=1130 audit(1752107689.077:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:49.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.078716 systemd[1]: Reached target ignition-complete.target. Jul 10 00:34:49.084086 systemd[1]: Starting initrd-parse-etc.service... Jul 10 00:34:49.098015 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:34:49.098129 systemd[1]: Finished initrd-parse-etc.service. Jul 10 00:34:49.106535 kernel: audit: type=1130 audit(1752107689.099:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.106556 kernel: audit: type=1131 audit(1752107689.099:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.100016 systemd[1]: Reached target initrd-fs.target. Jul 10 00:34:49.106901 systemd[1]: Reached target initrd.target. Jul 10 00:34:49.108184 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 10 00:34:49.109431 systemd[1]: Starting dracut-pre-pivot.service... Jul 10 00:34:49.122273 systemd[1]: Finished dracut-pre-pivot.service. 
Jul 10 00:34:49.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.124888 systemd[1]: Starting initrd-cleanup.service... Jul 10 00:34:49.128060 kernel: audit: type=1130 audit(1752107689.123:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.136493 systemd[1]: Stopped target nss-lookup.target. Jul 10 00:34:49.136908 systemd[1]: Stopped target remote-cryptsetup.target. Jul 10 00:34:49.138542 systemd[1]: Stopped target timers.target. Jul 10 00:34:49.140476 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:34:49.145983 kernel: audit: type=1131 audit(1752107689.141:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.140620 systemd[1]: Stopped dracut-pre-pivot.service. Jul 10 00:34:49.142120 systemd[1]: Stopped target initrd.target. Jul 10 00:34:49.147674 systemd[1]: Stopped target basic.target. Jul 10 00:34:49.148279 systemd[1]: Stopped target ignition-complete.target. Jul 10 00:34:49.148829 systemd[1]: Stopped target ignition-diskful.target. Jul 10 00:34:49.151750 systemd[1]: Stopped target initrd-root-device.target. Jul 10 00:34:49.153693 systemd[1]: Stopped target remote-fs.target. Jul 10 00:34:49.155720 systemd[1]: Stopped target remote-fs-pre.target. Jul 10 00:34:49.157132 systemd[1]: Stopped target sysinit.target. Jul 10 00:34:49.158580 systemd[1]: Stopped target local-fs.target. 
Jul 10 00:34:49.158925 systemd[1]: Stopped target local-fs-pre.target. Jul 10 00:34:49.161856 systemd[1]: Stopped target swap.target. Jul 10 00:34:49.162842 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:34:49.168397 kernel: audit: type=1131 audit(1752107689.164:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.162956 systemd[1]: Stopped dracut-pre-mount.service. Jul 10 00:34:49.164523 systemd[1]: Stopped target cryptsetup.target. Jul 10 00:34:49.174405 kernel: audit: type=1131 audit(1752107689.169:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.168725 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:34:49.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.168856 systemd[1]: Stopped dracut-initqueue.service. Jul 10 00:34:49.170425 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:34:49.170542 systemd[1]: Stopped ignition-fetch-offline.service. Jul 10 00:34:49.174934 systemd[1]: Stopped target paths.target. Jul 10 00:34:49.175189 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 10 00:34:49.178420 systemd[1]: Stopped systemd-ask-password-console.path. Jul 10 00:34:49.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.179115 systemd[1]: Stopped target slices.target. Jul 10 00:34:49.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.181068 systemd[1]: Stopped target sockets.target. Jul 10 00:34:49.188725 iscsid[712]: iscsid shutting down. Jul 10 00:34:49.182352 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:34:49.182461 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 10 00:34:49.184415 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:34:49.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.184495 systemd[1]: Stopped ignition-files.service. Jul 10 00:34:49.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:49.198164 ignition[871]: INFO : Ignition 2.14.0 Jul 10 00:34:49.198164 ignition[871]: INFO : Stage: umount Jul 10 00:34:49.198164 ignition[871]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:49.198164 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:49.198164 ignition[871]: INFO : umount: umount passed Jul 10 00:34:49.198164 ignition[871]: INFO : Ignition finished successfully Jul 10 00:34:49.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.185915 systemd[1]: Stopping ignition-mount.service... Jul 10 00:34:49.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.188904 systemd[1]: Stopping iscsid.service... Jul 10 00:34:49.190297 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 10 00:34:49.190435 systemd[1]: Stopped kmod-static-nodes.service. Jul 10 00:34:49.192908 systemd[1]: Stopping sysroot-boot.service... Jul 10 00:34:49.193678 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:34:49.193798 systemd[1]: Stopped systemd-udev-trigger.service. Jul 10 00:34:49.196557 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:34:49.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.196697 systemd[1]: Stopped dracut-pre-trigger.service. Jul 10 00:34:49.200215 systemd[1]: iscsid.service: Deactivated successfully. Jul 10 00:34:49.200340 systemd[1]: Stopped iscsid.service. Jul 10 00:34:49.202561 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:34:49.202666 systemd[1]: Stopped ignition-mount.service. Jul 10 00:34:49.204738 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:34:49.204848 systemd[1]: Closed iscsid.socket. Jul 10 00:34:49.206003 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:34:49.206058 systemd[1]: Stopped ignition-disks.service. Jul 10 00:34:49.207936 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:34:49.207982 systemd[1]: Stopped ignition-kargs.service. 
Jul 10 00:34:49.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.208952 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:34:49.209000 systemd[1]: Stopped ignition-setup.service. Jul 10 00:34:49.213521 systemd[1]: Stopping iscsiuio.service... Jul 10 00:34:49.218865 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:34:49.219382 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 10 00:34:49.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.219478 systemd[1]: Stopped iscsiuio.service. Jul 10 00:34:49.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.220527 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:34:49.220605 systemd[1]: Finished initrd-cleanup.service. Jul 10 00:34:49.222957 systemd[1]: Stopped target network.target. Jul 10 00:34:49.224603 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:34:49.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.224637 systemd[1]: Closed iscsiuio.socket. Jul 10 00:34:49.226000 systemd[1]: Stopping systemd-networkd.service... 
Jul 10 00:34:49.227693 systemd[1]: Stopping systemd-resolved.service... Jul 10 00:34:49.231478 systemd-networkd[707]: eth0: DHCPv6 lease lost Jul 10 00:34:49.285000 audit: BPF prog-id=9 op=UNLOAD Jul 10 00:34:49.286000 audit: BPF prog-id=6 op=UNLOAD Jul 10 00:34:49.232846 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:34:49.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.232984 systemd[1]: Stopped systemd-networkd.service. Jul 10 00:34:49.235670 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:34:49.235735 systemd[1]: Closed systemd-networkd.socket. Jul 10 00:34:49.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.238121 systemd[1]: Stopping network-cleanup.service... Jul 10 00:34:49.239411 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:34:49.239465 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 10 00:34:49.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.273298 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:34:49.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:49.273341 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:34:49.274933 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:34:49.274966 systemd[1]: Stopped systemd-modules-load.service. Jul 10 00:34:49.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.276557 systemd[1]: Stopping systemd-udevd.service... Jul 10 00:34:49.280981 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:34:49.281426 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:34:49.281522 systemd[1]: Stopped systemd-resolved.service. Jul 10 00:34:49.286649 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:34:49.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.286792 systemd[1]: Stopped systemd-udevd.service. Jul 10 00:34:49.289485 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:34:49.289559 systemd[1]: Stopped network-cleanup.service. Jul 10 00:34:49.291054 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:34:49.291083 systemd[1]: Closed systemd-udevd-control.socket. Jul 10 00:34:49.292520 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:34:49.292547 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 10 00:34:49.293966 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jul 10 00:34:49.294001 systemd[1]: Stopped dracut-pre-udev.service. Jul 10 00:34:49.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.295688 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:34:49.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.295719 systemd[1]: Stopped dracut-cmdline.service. Jul 10 00:34:49.297229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:34:49.297263 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 10 00:34:49.299478 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 10 00:34:49.300388 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:34:49.300457 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 10 00:34:49.306950 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:34:49.307022 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 10 00:34:49.316480 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:34:49.316557 systemd[1]: Stopped sysroot-boot.service. Jul 10 00:34:49.317362 systemd[1]: Reached target initrd-switch-root.target. Jul 10 00:34:49.318886 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:34:49.319005 systemd[1]: Stopped initrd-setup-root.service. Jul 10 00:34:49.320230 systemd[1]: Starting initrd-switch-root.service... Jul 10 00:34:49.339231 systemd[1]: Switching root. Jul 10 00:34:49.359486 systemd-journald[198]: Journal stopped Jul 10 00:34:52.716748 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Jul 10 00:34:52.716814 kernel: SELinux: Class mctp_socket not defined in policy. 
Jul 10 00:34:52.716827 kernel: SELinux: Class anon_inode not defined in policy. Jul 10 00:34:52.716838 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 10 00:34:52.716854 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:34:52.716866 kernel: SELinux: policy capability open_perms=1 Jul 10 00:34:52.716876 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:34:52.716885 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:34:52.716898 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:34:52.716908 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:34:52.716921 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:34:52.716930 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:34:52.716944 systemd[1]: Successfully loaded SELinux policy in 42.642ms. Jul 10 00:34:52.716966 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.190ms. Jul 10 00:34:52.716979 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:34:52.716990 systemd[1]: Detected virtualization kvm. Jul 10 00:34:52.717000 systemd[1]: Detected architecture x86-64. Jul 10 00:34:52.717011 systemd[1]: Detected first boot. Jul 10 00:34:52.717022 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:34:52.717034 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 10 00:34:52.717044 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:34:52.717054 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Jul 10 00:34:52.717066 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:34:52.717078 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:34:52.717089 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:34:52.717099 systemd[1]: Stopped initrd-switch-root.service. Jul 10 00:34:52.717111 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:34:52.717121 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 10 00:34:52.717132 systemd[1]: Created slice system-addon\x2drun.slice. Jul 10 00:34:52.717143 systemd[1]: Created slice system-getty.slice. Jul 10 00:34:52.717153 systemd[1]: Created slice system-modprobe.slice. Jul 10 00:34:52.717164 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 10 00:34:52.717174 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 10 00:34:52.717185 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 10 00:34:52.717203 systemd[1]: Created slice user.slice. Jul 10 00:34:52.717217 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:34:52.717228 systemd[1]: Started systemd-ask-password-wall.path. Jul 10 00:34:52.717238 systemd[1]: Set up automount boot.automount. Jul 10 00:34:52.717249 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 10 00:34:52.717259 systemd[1]: Stopped target initrd-switch-root.target. Jul 10 00:34:52.717270 systemd[1]: Stopped target initrd-fs.target. Jul 10 00:34:52.717280 systemd[1]: Stopped target initrd-root-fs.target. Jul 10 00:34:52.717292 systemd[1]: Reached target integritysetup.target. Jul 10 00:34:52.717302 systemd[1]: Reached target remote-cryptsetup.target. 
Jul 10 00:34:52.717313 systemd[1]: Reached target remote-fs.target. Jul 10 00:34:52.717323 systemd[1]: Reached target slices.target. Jul 10 00:34:52.717333 systemd[1]: Reached target swap.target. Jul 10 00:34:52.717344 systemd[1]: Reached target torcx.target. Jul 10 00:34:52.717354 systemd[1]: Reached target veritysetup.target. Jul 10 00:34:52.717364 systemd[1]: Listening on systemd-coredump.socket. Jul 10 00:34:52.717391 systemd[1]: Listening on systemd-initctl.socket. Jul 10 00:34:52.717401 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:34:52.717414 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:34:52.717424 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:34:52.717434 systemd[1]: Listening on systemd-userdbd.socket. Jul 10 00:34:52.717452 systemd[1]: Mounting dev-hugepages.mount... Jul 10 00:34:52.717464 systemd[1]: Mounting dev-mqueue.mount... Jul 10 00:34:52.717475 systemd[1]: Mounting media.mount... Jul 10 00:34:52.717486 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:52.717497 systemd[1]: Mounting sys-kernel-debug.mount... Jul 10 00:34:52.717507 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 10 00:34:52.717519 systemd[1]: Mounting tmp.mount... Jul 10 00:34:52.717530 systemd[1]: Starting flatcar-tmpfiles.service... Jul 10 00:34:52.717541 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:52.717554 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:34:52.717565 systemd[1]: Starting modprobe@configfs.service... Jul 10 00:34:52.717575 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:52.717585 systemd[1]: Starting modprobe@drm.service... Jul 10 00:34:52.717596 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:52.718348 systemd[1]: Starting modprobe@fuse.service... Jul 10 00:34:52.718365 systemd[1]: Starting modprobe@loop.service... 
Jul 10 00:34:52.718390 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:34:52.718401 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:34:52.718411 systemd[1]: Stopped systemd-fsck-root.service. Jul 10 00:34:52.718422 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:34:52.718432 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:34:52.718452 kernel: loop: module loaded Jul 10 00:34:52.718461 systemd[1]: Stopped systemd-journald.service. Jul 10 00:34:52.718472 systemd[1]: Starting systemd-journald.service... Jul 10 00:34:52.718484 kernel: fuse: init (API version 7.34) Jul 10 00:34:52.718494 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:34:52.718505 systemd[1]: Starting systemd-network-generator.service... Jul 10 00:34:52.718514 systemd[1]: Starting systemd-remount-fs.service... Jul 10 00:34:52.718525 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:34:52.718535 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:34:52.718545 systemd[1]: Stopped verity-setup.service. Jul 10 00:34:52.718555 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:52.718566 systemd[1]: Mounted dev-hugepages.mount. Jul 10 00:34:52.718578 systemd[1]: Mounted dev-mqueue.mount. Jul 10 00:34:52.718588 systemd[1]: Mounted media.mount. Jul 10 00:34:52.718600 systemd-journald[982]: Journal started Jul 10 00:34:52.718639 systemd-journald[982]: Runtime Journal (/run/log/journal/0b24c5252eb24828b2ec83f489c3559e) is 6.0M, max 48.4M, 42.4M free. 
Jul 10 00:34:49.429000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:34:49.683000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:34:49.683000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:34:49.683000 audit: BPF prog-id=10 op=LOAD Jul 10 00:34:49.683000 audit: BPF prog-id=10 op=UNLOAD Jul 10 00:34:49.683000 audit: BPF prog-id=11 op=LOAD Jul 10 00:34:49.683000 audit: BPF prog-id=11 op=UNLOAD Jul 10 00:34:49.719000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 10 00:34:49.719000 audit[904]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058cc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:49.719000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:34:49.720000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 10 00:34:49.720000 audit[904]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a5 a2=1ed a3=0 items=2 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:49.720000 audit: CWD cwd="/" Jul 10 00:34:49.720000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:49.720000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:49.720000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:34:52.568000 audit: BPF prog-id=12 op=LOAD Jul 10 00:34:52.568000 audit: BPF prog-id=3 op=UNLOAD Jul 10 00:34:52.568000 audit: BPF prog-id=13 op=LOAD Jul 10 00:34:52.568000 audit: BPF prog-id=14 op=LOAD Jul 10 00:34:52.569000 audit: BPF prog-id=4 op=UNLOAD Jul 10 00:34:52.569000 audit: BPF prog-id=5 op=UNLOAD Jul 10 00:34:52.569000 audit: BPF prog-id=15 op=LOAD Jul 10 00:34:52.569000 audit: BPF prog-id=12 op=UNLOAD Jul 10 00:34:52.569000 audit: BPF prog-id=16 op=LOAD Jul 10 00:34:52.569000 audit: BPF prog-id=17 op=LOAD Jul 10 00:34:52.569000 audit: BPF prog-id=13 op=UNLOAD Jul 10 00:34:52.569000 audit: BPF prog-id=14 op=UNLOAD Jul 10 00:34:52.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:52.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.576000 audit: BPF prog-id=15 op=UNLOAD Jul 10 00:34:52.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.688000 audit: BPF prog-id=18 op=LOAD Jul 10 00:34:52.688000 audit: BPF prog-id=19 op=LOAD Jul 10 00:34:52.688000 audit: BPF prog-id=20 op=LOAD Jul 10 00:34:52.688000 audit: BPF prog-id=16 op=UNLOAD Jul 10 00:34:52.688000 audit: BPF prog-id=17 op=UNLOAD Jul 10 00:34:52.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:52.714000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:34:52.714000 audit[982]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc674239a0 a2=4000 a3=7ffc67423a3c items=0 ppid=1 pid=982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:52.714000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:34:52.566560 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:34:49.717723 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:34:52.566574 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 10 00:34:49.717991 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:34:52.570851 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 10 00:34:49.718013 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:34:49.718042 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 10 00:34:49.718051 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 10 00:34:49.718079 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 10 00:34:52.721923 systemd[1]: Started systemd-journald.service. Jul 10 00:34:52.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:49.718091 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 10 00:34:49.718300 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 10 00:34:49.718337 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:34:49.718349 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:34:49.718983 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 10 00:34:49.719031 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 10 00:34:49.719049 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 10 00:34:52.722485 systemd[1]: Mounted sys-kernel-debug.mount. 
Jul 10 00:34:49.719062 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 10 00:34:49.719078 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 10 00:34:49.719090 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 10 00:34:52.723587 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 10 00:34:52.276910 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:34:52.277222 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:34:52.724631 systemd[1]: Mounted tmp.mount. 
Jul 10 00:34:52.277350 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:34:52.277566 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:34:52.277615 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 10 00:34:52.277691 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2025-07-10T00:34:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 10 00:34:52.725801 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 00:34:52.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.727040 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:34:52.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.728240 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 10 00:34:52.728453 systemd[1]: Finished modprobe@configfs.service. Jul 10 00:34:52.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.729653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:52.729844 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:52.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.731867 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:34:52.732133 systemd[1]: Finished modprobe@drm.service. Jul 10 00:34:52.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.733328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:52.733530 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 10 00:34:52.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.734939 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:34:52.735132 systemd[1]: Finished modprobe@fuse.service. Jul 10 00:34:52.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.736271 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:52.736416 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:52.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.737546 systemd[1]: Finished systemd-modules-load.service. 
Jul 10 00:34:52.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.738767 systemd[1]: Finished systemd-network-generator.service. Jul 10 00:34:52.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.739992 systemd[1]: Finished systemd-remount-fs.service. Jul 10 00:34:52.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.741415 systemd[1]: Reached target network-pre.target. Jul 10 00:34:52.743615 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 00:34:52.745747 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 00:34:52.746580 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:34:52.749406 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 00:34:52.751572 systemd[1]: Starting systemd-journal-flush.service... Jul 10 00:34:52.752539 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:52.753894 systemd[1]: Starting systemd-random-seed.service... Jul 10 00:34:52.754818 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:52.765550 systemd-journald[982]: Time spent on flushing to /var/log/journal/0b24c5252eb24828b2ec83f489c3559e is 15.786ms for 1162 entries. 
Jul 10 00:34:52.765550 systemd-journald[982]: System Journal (/var/log/journal/0b24c5252eb24828b2ec83f489c3559e) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:34:52.804888 systemd-journald[982]: Received client request to flush runtime journal. Jul 10 00:34:52.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:52.756275 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:34:52.759636 systemd[1]: Starting systemd-sysusers.service... Jul 10 00:34:52.763790 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 00:34:52.805857 udevadm[1008]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 00:34:52.770204 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:34:52.771281 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 00:34:52.773725 systemd[1]: Starting systemd-udev-settle.service... Jul 10 00:34:52.779436 systemd[1]: Finished systemd-random-seed.service. Jul 10 00:34:52.782804 systemd[1]: Reached target first-boot-complete.target. 
Jul 10 00:34:52.784652 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:34:52.786804 systemd[1]: Finished systemd-sysusers.service. Jul 10 00:34:52.805927 systemd[1]: Finished systemd-journal-flush.service. Jul 10 00:34:52.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:53.324718 systemd[1]: Finished systemd-hwdb-update.service. Jul 10 00:34:53.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:53.326000 audit: BPF prog-id=21 op=LOAD Jul 10 00:34:53.326000 audit: BPF prog-id=22 op=LOAD Jul 10 00:34:53.326000 audit: BPF prog-id=7 op=UNLOAD Jul 10 00:34:53.326000 audit: BPF prog-id=8 op=UNLOAD Jul 10 00:34:53.327332 systemd[1]: Starting systemd-udevd.service... Jul 10 00:34:53.344265 systemd-udevd[1011]: Using default interface naming scheme 'v252'. Jul 10 00:34:53.359253 systemd[1]: Started systemd-udevd.service. Jul 10 00:34:53.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:53.360000 audit: BPF prog-id=23 op=LOAD Jul 10 00:34:53.361816 systemd[1]: Starting systemd-networkd.service... Jul 10 00:34:53.367000 audit: BPF prog-id=24 op=LOAD Jul 10 00:34:53.367000 audit: BPF prog-id=25 op=LOAD Jul 10 00:34:53.367000 audit: BPF prog-id=26 op=LOAD Jul 10 00:34:53.368619 systemd[1]: Starting systemd-userdbd.service... Jul 10 00:34:53.395439 systemd[1]: Started systemd-userdbd.service. 
Jul 10 00:34:53.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:53.409125 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 10 00:34:53.416192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:34:53.450407 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 10 00:34:53.455454 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:34:53.460053 systemd-networkd[1017]: lo: Link UP Jul 10 00:34:53.460466 systemd-networkd[1017]: lo: Gained carrier Jul 10 00:34:53.460969 systemd-networkd[1017]: Enumeration completed Jul 10 00:34:53.461166 systemd-networkd[1017]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:34:53.461211 systemd[1]: Started systemd-networkd.service. Jul 10 00:34:53.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:53.462914 systemd-networkd[1017]: eth0: Link UP Jul 10 00:34:53.462994 systemd-networkd[1017]: eth0: Gained carrier Jul 10 00:34:53.475597 systemd-networkd[1017]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:34:53.474000 audit[1021]: AVC avc: denied { confidentiality } for pid=1021 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 10 00:34:53.474000 audit[1021]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e811986d50 a1=338ac a2=7f984d8ddbc5 a3=5 items=110 ppid=1011 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:53.474000 audit: CWD cwd="/" Jul 10 00:34:53.474000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=1 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=2 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=3 name=(null) inode=15368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=4 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 
00:34:53.474000 audit: PATH item=5 name=(null) inode=15369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=6 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=7 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=8 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=9 name=(null) inode=15371 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=10 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=11 name=(null) inode=15372 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=12 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=13 name=(null) inode=15373 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=14 name=(null) 
inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=15 name=(null) inode=15374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=16 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=17 name=(null) inode=15375 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=18 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=19 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=20 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=21 name=(null) inode=15377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=22 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=23 name=(null) inode=15378 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=24 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=25 name=(null) inode=15379 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=26 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=27 name=(null) inode=15380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=28 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=29 name=(null) inode=15381 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=30 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=31 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=32 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=33 name=(null) inode=15383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=34 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=35 name=(null) inode=15384 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=36 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=37 name=(null) inode=15385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=38 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=39 name=(null) inode=15386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=40 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=41 name=(null) inode=15387 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=42 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=43 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=44 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=45 name=(null) inode=15389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=46 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=47 name=(null) inode=15390 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=48 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=49 name=(null) inode=15391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=50 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=51 name=(null) inode=15392 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=52 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=53 name=(null) inode=15393 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=55 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=56 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=57 name=(null) inode=15395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=58 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=59 name=(null) inode=15396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 
00:34:53.474000 audit: PATH item=60 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.489486 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 10 00:34:53.474000 audit: PATH item=61 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=62 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=63 name=(null) inode=15398 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=64 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=65 name=(null) inode=15399 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=66 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=67 name=(null) inode=15400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=68 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=69 name=(null) inode=15401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=70 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=71 name=(null) inode=15402 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=72 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=73 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=74 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=75 name=(null) inode=15404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=76 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=77 name=(null) inode=15405 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=78 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=79 name=(null) inode=15406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=80 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=81 name=(null) inode=15407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=82 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=83 name=(null) inode=15408 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=84 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=85 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=86 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 
00:34:53.474000 audit: PATH item=87 name=(null) inode=15410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=88 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=89 name=(null) inode=15411 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=90 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=91 name=(null) inode=15412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=92 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=93 name=(null) inode=15413 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=94 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=95 name=(null) inode=15414 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=96 
name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=97 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=98 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=99 name=(null) inode=15416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=100 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=101 name=(null) inode=15417 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=102 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=103 name=(null) inode=15418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=104 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=105 name=(null) inode=15419 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=106 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=107 name=(null) inode=15420 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PATH item=109 name=(null) inode=15421 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:53.474000 audit: PROCTITLE proctitle="(udev-worker)" Jul 10 00:34:53.509620 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 10 00:34:53.514410 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 10 00:34:53.514737 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 10 00:34:53.514896 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 00:34:53.515015 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:34:53.571589 kernel: kvm: Nested Virtualization enabled Jul 10 00:34:53.571749 kernel: SVM: kvm: Nested Paging enabled Jul 10 00:34:53.571768 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 10 00:34:53.572753 kernel: SVM: Virtual GIF supported Jul 10 00:34:53.590415 kernel: EDAC MC: Ver: 3.0.0 Jul 10 00:34:53.619935 systemd[1]: Finished systemd-udev-settle.service. 
Jul 10 00:34:53.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:53.622455 systemd[1]: Starting lvm2-activation-early.service... Jul 10 00:34:53.631782 lvm[1047]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:34:53.661614 systemd[1]: Finished lvm2-activation-early.service. Jul 10 00:34:53.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:53.662792 systemd[1]: Reached target cryptsetup.target. Jul 10 00:34:53.665109 systemd[1]: Starting lvm2-activation.service... Jul 10 00:34:53.669671 lvm[1048]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:34:53.695425 systemd[1]: Finished lvm2-activation.service. Jul 10 00:34:53.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:53.696451 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:34:53.697291 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:34:53.697319 systemd[1]: Reached target local-fs.target. Jul 10 00:34:53.698102 systemd[1]: Reached target machines.target. Jul 10 00:34:53.700174 systemd[1]: Starting ldconfig.service... Jul 10 00:34:53.701159 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 10 00:34:53.701470 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:53.702587 systemd[1]: Starting systemd-boot-update.service... Jul 10 00:34:53.704860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 00:34:53.707302 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 00:34:53.709323 systemd[1]: Starting systemd-sysext.service... Jul 10 00:34:53.710532 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1050 (bootctl) Jul 10 00:34:53.711923 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 00:34:53.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:53.716066 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 10 00:34:53.724134 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 00:34:53.728717 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 00:34:53.728913 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 00:34:53.742412 kernel: loop0: detected capacity change from 0 to 229808 Jul 10 00:34:53.749506 systemd-fsck[1058]: fsck.fat 4.2 (2021-01-31) Jul 10 00:34:53.749506 systemd-fsck[1058]: /dev/vda1: 791 files, 120751/258078 clusters Jul 10 00:34:53.750981 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 10 00:34:53.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:53.754005 systemd[1]: Mounting boot.mount... 
Jul 10 00:34:53.773867 systemd[1]: Mounted boot.mount. Jul 10 00:34:53.785182 systemd[1]: Finished systemd-boot-update.service. Jul 10 00:34:53.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.326466 ldconfig[1049]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:34:54.329403 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:34:54.360400 kernel: loop1: detected capacity change from 0 to 229808 Jul 10 00:34:54.366212 (sd-sysext)[1064]: Using extensions 'kubernetes'. Jul 10 00:34:54.366783 (sd-sysext)[1064]: Merged extensions into '/usr'. Jul 10 00:34:54.381179 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:54.382765 systemd[1]: Mounting usr-share-oem.mount... Jul 10 00:34:54.383793 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.385271 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:54.387290 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:54.389050 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:54.389927 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.390076 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:54.390223 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:54.393388 systemd[1]: Finished ldconfig.service. 
Jul 10 00:34:54.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.394534 systemd[1]: Mounted usr-share-oem.mount. Jul 10 00:34:54.395086 kernel: kauditd_printk_skb: 231 callbacks suppressed Jul 10 00:34:54.395133 kernel: audit: type=1130 audit(1752107694.393:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.399750 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:54.399920 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:54.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.401549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:54.401696 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:54.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.404447 kernel: audit: type=1130 audit(1752107694.400:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.404489 kernel: audit: type=1131 audit(1752107694.400:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:54.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.410555 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:54.410724 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:54.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.413397 kernel: audit: type=1130 audit(1752107694.409:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.413454 kernel: audit: type=1131 audit(1752107694.409:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.419759 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:34:54.420690 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 00:34:54.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:54.421438 kernel: audit: type=1130 audit(1752107694.417:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.421485 kernel: audit: type=1131 audit(1752107694.417:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.426252 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:54.426407 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.429307 systemd[1]: Finished systemd-sysext.service. Jul 10 00:34:54.430369 kernel: audit: type=1130 audit(1752107694.425:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.430433 kernel: audit: type=1130 audit(1752107694.429:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.431948 systemd[1]: Starting ensure-sysext.service... 
Jul 10 00:34:54.435970 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 00:34:54.439733 systemd[1]: Reloading. Jul 10 00:34:54.453273 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 00:34:54.456060 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:34:54.459847 systemd-tmpfiles[1072]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:34:54.492039 /usr/lib/systemd/system-generators/torcx-generator[1092]: time="2025-07-10T00:34:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:34:54.492472 /usr/lib/systemd/system-generators/torcx-generator[1092]: time="2025-07-10T00:34:54Z" level=info msg="torcx already run" Jul 10 00:34:54.569189 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:34:54.569213 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:34:54.588509 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 10 00:34:54.646000 audit: BPF prog-id=27 op=LOAD Jul 10 00:34:54.646000 audit: BPF prog-id=24 op=UNLOAD Jul 10 00:34:54.647000 audit: BPF prog-id=28 op=LOAD Jul 10 00:34:54.647000 audit: BPF prog-id=29 op=LOAD Jul 10 00:34:54.647000 audit: BPF prog-id=25 op=UNLOAD Jul 10 00:34:54.647000 audit: BPF prog-id=26 op=UNLOAD Jul 10 00:34:54.648409 kernel: audit: type=1334 audit(1752107694.646:163): prog-id=27 op=LOAD Jul 10 00:34:54.650000 audit: BPF prog-id=30 op=LOAD Jul 10 00:34:54.650000 audit: BPF prog-id=31 op=LOAD Jul 10 00:34:54.650000 audit: BPF prog-id=21 op=UNLOAD Jul 10 00:34:54.650000 audit: BPF prog-id=22 op=UNLOAD Jul 10 00:34:54.651000 audit: BPF prog-id=32 op=LOAD Jul 10 00:34:54.651000 audit: BPF prog-id=23 op=UNLOAD Jul 10 00:34:54.651000 audit: BPF prog-id=33 op=LOAD Jul 10 00:34:54.652000 audit: BPF prog-id=18 op=UNLOAD Jul 10 00:34:54.652000 audit: BPF prog-id=34 op=LOAD Jul 10 00:34:54.652000 audit: BPF prog-id=35 op=LOAD Jul 10 00:34:54.652000 audit: BPF prog-id=19 op=UNLOAD Jul 10 00:34:54.652000 audit: BPF prog-id=20 op=UNLOAD Jul 10 00:34:54.656179 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 10 00:34:54.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.661395 systemd[1]: Starting audit-rules.service... Jul 10 00:34:54.663232 systemd[1]: Starting clean-ca-certificates.service... Jul 10 00:34:54.665334 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 10 00:34:54.666000 audit: BPF prog-id=36 op=LOAD Jul 10 00:34:54.667956 systemd[1]: Starting systemd-resolved.service... Jul 10 00:34:54.669000 audit: BPF prog-id=37 op=LOAD Jul 10 00:34:54.670454 systemd[1]: Starting systemd-timesyncd.service... Jul 10 00:34:54.672245 systemd[1]: Starting systemd-update-utmp.service... 
Jul 10 00:34:54.673642 systemd[1]: Finished clean-ca-certificates.service. Jul 10 00:34:54.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.678000 audit[1141]: SYSTEM_BOOT pid=1141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.676788 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:54.682765 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.684398 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:54.686742 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:54.688877 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:54.689830 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.689999 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:54.690141 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:54.691825 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 10 00:34:54.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:54.693446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:54.693571 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:54.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.695045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:54.695151 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:54.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.696772 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:54.696878 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:54.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.699602 systemd[1]: Finished systemd-update-utmp.service. 
Jul 10 00:34:54.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.705610 systemd[1]: Finished ensure-sysext.service. Jul 10 00:34:54.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:54.707908 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.709393 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:54.710000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 10 00:34:54.710000 audit[1157]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc74621c70 a2=420 a3=0 items=0 ppid=1133 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:54.710000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 10 00:34:54.711174 augenrules[1157]: No rules Jul 10 00:34:54.711758 systemd[1]: Starting modprobe@drm.service... Jul 10 00:34:54.713973 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:54.715976 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:54.716991 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.717065 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 10 00:34:54.718387 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 10 00:34:54.720846 systemd[1]: Starting systemd-update-done.service... Jul 10 00:34:54.721790 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:54.722376 systemd[1]: Finished audit-rules.service. Jul 10 00:34:54.723360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:54.723501 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:54.724531 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:34:54.724638 systemd[1]: Finished modprobe@drm.service. Jul 10 00:34:54.725713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:54.725843 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:54.727034 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:54.727137 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:54.730277 systemd[1]: Finished systemd-update-done.service. Jul 10 00:34:54.731752 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:54.731786 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:54.731818 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.731828 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:54.732923 systemd-resolved[1136]: Positive Trust Anchors: Jul 10 00:34:54.732934 systemd-resolved[1136]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:34:54.732961 systemd-resolved[1136]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:34:54.740991 systemd[1]: Started systemd-timesyncd.service. Jul 10 00:34:54.741338 systemd-resolved[1136]: Defaulting to hostname 'linux'. Jul 10 00:34:54.742384 systemd-timesyncd[1138]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:34:54.742415 systemd[1]: Reached target time-set.target. Jul 10 00:34:54.742428 systemd-timesyncd[1138]: Initial clock synchronization to Thu 2025-07-10 00:34:55.090979 UTC. Jul 10 00:34:54.743453 systemd[1]: Started systemd-resolved.service. Jul 10 00:34:54.744531 systemd[1]: Reached target network.target. Jul 10 00:34:54.745506 systemd[1]: Reached target nss-lookup.target. Jul 10 00:34:54.746326 systemd[1]: Reached target sysinit.target. Jul 10 00:34:54.747283 systemd[1]: Started motdgen.path. Jul 10 00:34:54.748045 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 10 00:34:54.749310 systemd[1]: Started logrotate.timer. Jul 10 00:34:54.750143 systemd[1]: Started mdadm.timer. Jul 10 00:34:54.750850 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 10 00:34:54.751718 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:34:54.751745 systemd[1]: Reached target paths.target. Jul 10 00:34:54.752503 systemd[1]: Reached target timers.target. Jul 10 00:34:54.753604 systemd[1]: Listening on dbus.socket. 
Jul 10 00:34:54.755450 systemd[1]: Starting docker.socket... Jul 10 00:34:54.758435 systemd[1]: Listening on sshd.socket. Jul 10 00:34:54.759301 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:54.759677 systemd[1]: Listening on docker.socket. Jul 10 00:34:54.760531 systemd[1]: Reached target sockets.target. Jul 10 00:34:54.761303 systemd[1]: Reached target basic.target. Jul 10 00:34:54.762130 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.762161 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:34:54.763059 systemd[1]: Starting containerd.service... Jul 10 00:34:54.764783 systemd[1]: Starting dbus.service... Jul 10 00:34:54.766411 systemd[1]: Starting enable-oem-cloudinit.service... Jul 10 00:34:54.768244 systemd[1]: Starting extend-filesystems.service... Jul 10 00:34:54.769213 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 10 00:34:54.770293 jq[1172]: false Jul 10 00:34:54.770159 systemd[1]: Starting motdgen.service... Jul 10 00:34:54.772223 systemd[1]: Starting prepare-helm.service... Jul 10 00:34:54.775086 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 10 00:34:54.777486 systemd[1]: Starting sshd-keygen.service... Jul 10 00:34:54.782003 systemd[1]: Starting systemd-logind.service... Jul 10 00:34:54.783059 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:54.783161 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 10 00:34:54.783724 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:34:54.784675 systemd[1]: Starting update-engine.service... Jul 10 00:34:54.785701 extend-filesystems[1173]: Found loop1 Jul 10 00:34:54.787820 extend-filesystems[1173]: Found sr0 Jul 10 00:34:54.787820 extend-filesystems[1173]: Found vda Jul 10 00:34:54.787820 extend-filesystems[1173]: Found vda1 Jul 10 00:34:54.787820 extend-filesystems[1173]: Found vda2 Jul 10 00:34:54.787820 extend-filesystems[1173]: Found vda3 Jul 10 00:34:54.787820 extend-filesystems[1173]: Found usr Jul 10 00:34:54.787820 extend-filesystems[1173]: Found vda4 Jul 10 00:34:54.787820 extend-filesystems[1173]: Found vda6 Jul 10 00:34:54.787820 extend-filesystems[1173]: Found vda7 Jul 10 00:34:54.787820 extend-filesystems[1173]: Found vda9 Jul 10 00:34:54.787820 extend-filesystems[1173]: Checking size of /dev/vda9 Jul 10 00:34:54.787257 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 10 00:34:54.791994 jq[1192]: true Jul 10 00:34:54.797011 dbus-daemon[1171]: [system] SELinux support is enabled Jul 10 00:34:54.799307 systemd[1]: Started dbus.service. Jul 10 00:34:54.804093 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:34:54.804341 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 10 00:34:54.804769 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:34:54.804952 systemd[1]: Finished motdgen.service. Jul 10 00:34:54.807470 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:34:54.807650 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 10 00:34:54.813915 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 10 00:34:54.813990 systemd[1]: Reached target system-config.target. Jul 10 00:34:54.815563 tar[1195]: linux-amd64/LICENSE Jul 10 00:34:54.815885 jq[1197]: true Jul 10 00:34:54.816127 tar[1195]: linux-amd64/helm Jul 10 00:34:54.876369 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:34:54.876446 systemd[1]: Reached target user-config.target. Jul 10 00:34:54.894905 extend-filesystems[1173]: Resized partition /dev/vda9 Jul 10 00:34:54.939665 update_engine[1189]: I0710 00:34:54.936890 1189 main.cc:92] Flatcar Update Engine starting Jul 10 00:34:54.944959 systemd[1]: Started update-engine.service. Jul 10 00:34:54.945071 update_engine[1189]: I0710 00:34:54.944988 1189 update_check_scheduler.cc:74] Next update check in 6m10s Jul 10 00:34:54.948279 systemd[1]: Started locksmithd.service. Jul 10 00:34:54.953975 extend-filesystems[1206]: resize2fs 1.46.5 (30-Dec-2021) Jul 10 00:34:54.954450 systemd-logind[1186]: Watching system buttons on /dev/input/event1 (Power Button) Jul 10 00:34:54.954465 systemd-logind[1186]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:34:54.957527 systemd-logind[1186]: New seat seat0. Jul 10 00:34:54.960915 systemd[1]: Started systemd-logind.service. Jul 10 00:34:54.969396 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:34:55.053556 env[1198]: time="2025-07-10T00:34:55.053469376Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 10 00:34:55.079830 env[1198]: time="2025-07-10T00:34:55.079730965Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:34:55.080340 env[1198]: time="2025-07-10T00:34:55.080292977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 10 00:34:55.082576 env[1198]: time="2025-07-10T00:34:55.082547487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:34:55.082694 env[1198]: time="2025-07-10T00:34:55.082669272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:55.083273 env[1198]: time="2025-07-10T00:34:55.083227500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:34:55.083371 env[1198]: time="2025-07-10T00:34:55.083345532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:55.083498 env[1198]: time="2025-07-10T00:34:55.083463051Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 10 00:34:55.083596 env[1198]: time="2025-07-10T00:34:55.083570659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:55.083868 env[1198]: time="2025-07-10T00:34:55.083846672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:55.084307 env[1198]: time="2025-07-10T00:34:55.084277751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:55.084533 env[1198]: time="2025-07-10T00:34:55.084511555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:34:55.084614 env[1198]: time="2025-07-10T00:34:55.084593830Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:34:55.084809 env[1198]: time="2025-07-10T00:34:55.084782081Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 10 00:34:55.084953 env[1198]: time="2025-07-10T00:34:55.084909418Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:34:55.088591 systemd-networkd[1017]: eth0: Gained IPv6LL Jul 10 00:34:55.092262 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 10 00:34:55.093959 systemd[1]: Reached target network-online.target. Jul 10 00:34:55.221559 systemd[1]: Starting kubelet.service... Jul 10 00:34:55.324462 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:34:55.386671 extend-filesystems[1206]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:34:55.386671 extend-filesystems[1206]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:34:55.386671 extend-filesystems[1206]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:34:55.391862 extend-filesystems[1173]: Resized filesystem in /dev/vda9 Jul 10 00:34:55.391875 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:34:55.393452 systemd[1]: Finished extend-filesystems.service. Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395052161Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395118490Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395158033Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395233836Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395262839Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395365219Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395410156Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395432102Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395450201Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395469700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395487987Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395507799Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395647182Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 10 00:34:55.398224 env[1198]: time="2025-07-10T00:34:55.395778261Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396302561Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396353290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396389508Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396504194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396527688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396648030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396670091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396719263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396739850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396757655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396775963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396795650Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.396995015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.397019198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.398754 env[1198]: time="2025-07-10T00:34:55.397038206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:34:55.399241 env[1198]: time="2025-07-10T00:34:55.397056012Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:34:55.399241 env[1198]: time="2025-07-10T00:34:55.397076369Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 10 00:34:55.399241 env[1198]: time="2025-07-10T00:34:55.397092920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:34:55.399241 env[1198]: time="2025-07-10T00:34:55.397131009Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 10 00:34:55.399241 env[1198]: time="2025-07-10T00:34:55.397185880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 10 00:34:55.399473 env[1198]: time="2025-07-10T00:34:55.397519137Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:34:55.399473 env[1198]: time="2025-07-10T00:34:55.397601296Z" level=info msg="Connect containerd service" Jul 10 00:34:55.399473 env[1198]: time="2025-07-10T00:34:55.397645836Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:34:55.402257 env[1198]: time="2025-07-10T00:34:55.400381674Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:34:55.402257 env[1198]: time="2025-07-10T00:34:55.400575131Z" level=info msg="Start subscribing containerd event" Jul 10 00:34:55.402257 env[1198]: time="2025-07-10T00:34:55.400653223Z" level=info msg="Start recovering state" Jul 10 00:34:55.402257 env[1198]: time="2025-07-10T00:34:55.400748525Z" level=info msg="Start event monitor" Jul 10 00:34:55.402257 env[1198]: time="2025-07-10T00:34:55.400790765Z" level=info msg="Start snapshots syncer" Jul 10 00:34:55.402257 env[1198]: time="2025-07-10T00:34:55.400804179Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:34:55.402257 env[1198]: time="2025-07-10T00:34:55.401063903Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:34:55.402257 env[1198]: time="2025-07-10T00:34:55.401118877Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:34:55.402257 env[1198]: time="2025-07-10T00:34:55.401726664Z" level=info msg="containerd successfully booted in 0.349087s" Jul 10 00:34:55.401355 systemd[1]: Started containerd.service. 
Jul 10 00:34:55.402690 bash[1218]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:34:55.405514 env[1198]: time="2025-07-10T00:34:55.400815241Z" level=info msg="Start streaming server" Jul 10 00:34:55.403740 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 10 00:34:55.426952 locksmithd[1211]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:34:55.508545 tar[1195]: linux-amd64/README.md Jul 10 00:34:55.514344 systemd[1]: Finished prepare-helm.service. Jul 10 00:34:55.544073 sshd_keygen[1190]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:34:55.568928 systemd[1]: Finished sshd-keygen.service. Jul 10 00:34:55.573101 systemd[1]: Starting issuegen.service... Jul 10 00:34:55.581582 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:34:55.581849 systemd[1]: Finished issuegen.service. Jul 10 00:34:55.585136 systemd[1]: Starting systemd-user-sessions.service... Jul 10 00:34:55.593795 systemd[1]: Finished systemd-user-sessions.service. Jul 10 00:34:55.596868 systemd[1]: Started getty@tty1.service. Jul 10 00:34:55.617946 systemd[1]: Started serial-getty@ttyS0.service. Jul 10 00:34:55.619379 systemd[1]: Reached target getty.target. Jul 10 00:34:56.469278 systemd[1]: Started kubelet.service. Jul 10 00:34:56.471004 systemd[1]: Reached target multi-user.target. Jul 10 00:34:56.473586 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 10 00:34:56.482253 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 10 00:34:56.482470 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 10 00:34:56.483595 systemd[1]: Startup finished in 830ms (kernel) + 5.515s (initrd) + 7.098s (userspace) = 13.444s. 
Jul 10 00:34:57.131161 kubelet[1253]: E0710 00:34:57.131092 1253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:34:57.133279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:34:57.133462 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:34:57.133761 systemd[1]: kubelet.service: Consumed 1.536s CPU time.
Jul 10 00:34:57.204002 systemd[1]: Created slice system-sshd.slice.
Jul 10 00:34:57.205095 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:34900.service.
Jul 10 00:34:57.242619 sshd[1262]: Accepted publickey for core from 10.0.0.1 port 34900 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M
Jul 10 00:34:57.244283 sshd[1262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:34:57.254282 systemd-logind[1186]: New session 1 of user core.
Jul 10 00:34:57.255472 systemd[1]: Created slice user-500.slice.
Jul 10 00:34:57.257139 systemd[1]: Starting user-runtime-dir@500.service...
Jul 10 00:34:57.269529 systemd[1]: Finished user-runtime-dir@500.service.
Jul 10 00:34:57.271329 systemd[1]: Starting user@500.service...
Jul 10 00:34:57.274501 (systemd)[1265]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:34:57.349128 systemd[1265]: Queued start job for default target default.target.
Jul 10 00:34:57.349736 systemd[1265]: Reached target paths.target.
Jul 10 00:34:57.349756 systemd[1265]: Reached target sockets.target.
Jul 10 00:34:57.349769 systemd[1265]: Reached target timers.target.
Jul 10 00:34:57.349780 systemd[1265]: Reached target basic.target.
Jul 10 00:34:57.349818 systemd[1265]: Reached target default.target.
Jul 10 00:34:57.349841 systemd[1265]: Startup finished in 67ms.
Jul 10 00:34:57.349929 systemd[1]: Started user@500.service.
Jul 10 00:34:57.350925 systemd[1]: Started session-1.scope.
Jul 10 00:34:57.404344 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:34906.service.
Jul 10 00:34:57.438702 sshd[1274]: Accepted publickey for core from 10.0.0.1 port 34906 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M
Jul 10 00:34:57.439952 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:34:57.443384 systemd-logind[1186]: New session 2 of user core.
Jul 10 00:34:57.444475 systemd[1]: Started session-2.scope.
Jul 10 00:34:57.846778 sshd[1274]: pam_unix(sshd:session): session closed for user core
Jul 10 00:34:57.850157 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:34906.service: Deactivated successfully.
Jul 10 00:34:57.850868 systemd[1]: session-2.scope: Deactivated successfully.
Jul 10 00:34:57.851546 systemd-logind[1186]: Session 2 logged out. Waiting for processes to exit.
Jul 10 00:34:57.852866 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:34914.service.
Jul 10 00:34:57.853854 systemd-logind[1186]: Removed session 2.
Jul 10 00:34:57.885797 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 34914 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M
Jul 10 00:34:57.886816 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:34:57.889853 systemd-logind[1186]: New session 3 of user core.
Jul 10 00:34:57.890792 systemd[1]: Started session-3.scope.
Jul 10 00:34:57.942649 sshd[1280]: pam_unix(sshd:session): session closed for user core
Jul 10 00:34:57.945818 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:34914.service: Deactivated successfully.
Jul 10 00:34:57.946437 systemd[1]: session-3.scope: Deactivated successfully.
Jul 10 00:34:57.947082 systemd-logind[1186]: Session 3 logged out. Waiting for processes to exit.
Jul 10 00:34:57.948284 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:34926.service.
Jul 10 00:34:57.949244 systemd-logind[1186]: Removed session 3.
Jul 10 00:34:57.984462 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 34926 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M
Jul 10 00:34:57.985535 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:34:57.988831 systemd-logind[1186]: New session 4 of user core.
Jul 10 00:34:57.989673 systemd[1]: Started session-4.scope.
Jul 10 00:34:58.044699 sshd[1287]: pam_unix(sshd:session): session closed for user core
Jul 10 00:34:58.047170 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:34926.service: Deactivated successfully.
Jul 10 00:34:58.047831 systemd[1]: session-4.scope: Deactivated successfully.
Jul 10 00:34:58.048369 systemd-logind[1186]: Session 4 logged out. Waiting for processes to exit.
Jul 10 00:34:58.049611 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:34940.service.
Jul 10 00:34:58.050429 systemd-logind[1186]: Removed session 4.
Jul 10 00:34:58.081790 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 34940 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M
Jul 10 00:34:58.082815 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:34:58.085842 systemd-logind[1186]: New session 5 of user core.
Jul 10 00:34:58.086612 systemd[1]: Started session-5.scope.
Jul 10 00:34:58.142827 sudo[1296]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 10 00:34:58.143070 sudo[1296]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 10 00:34:58.175067 systemd[1]: Starting docker.service...
Jul 10 00:34:58.278781 env[1307]: time="2025-07-10T00:34:58.278709880Z" level=info msg="Starting up"
Jul 10 00:34:58.283309 env[1307]: time="2025-07-10T00:34:58.283266104Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 10 00:34:58.283309 env[1307]: time="2025-07-10T00:34:58.283294017Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 10 00:34:58.283459 env[1307]: time="2025-07-10T00:34:58.283319506Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 10 00:34:58.283459 env[1307]: time="2025-07-10T00:34:58.283333797Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 10 00:34:58.285212 env[1307]: time="2025-07-10T00:34:58.285190919Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 10 00:34:58.285212 env[1307]: time="2025-07-10T00:34:58.285208696Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 10 00:34:58.285277 env[1307]: time="2025-07-10T00:34:58.285219182Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 10 00:34:58.285277 env[1307]: time="2025-07-10T00:34:58.285226915Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 10 00:34:58.289944 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4096062097-merged.mount: Deactivated successfully.
Jul 10 00:34:59.910864 env[1307]: time="2025-07-10T00:34:59.910814838Z" level=info msg="Loading containers: start."
Jul 10 00:35:00.050433 kernel: Initializing XFRM netlink socket
Jul 10 00:35:00.080488 env[1307]: time="2025-07-10T00:35:00.080437504Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 10 00:35:00.129435 systemd-networkd[1017]: docker0: Link UP
Jul 10 00:35:00.330866 env[1307]: time="2025-07-10T00:35:00.330729387Z" level=info msg="Loading containers: done."
Jul 10 00:35:00.341941 env[1307]: time="2025-07-10T00:35:00.341865085Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 10 00:35:00.342187 env[1307]: time="2025-07-10T00:35:00.342167021Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 10 00:35:00.342317 env[1307]: time="2025-07-10T00:35:00.342287517Z" level=info msg="Daemon has completed initialization"
Jul 10 00:35:00.362156 systemd[1]: Started docker.service.
Jul 10 00:35:00.370190 env[1307]: time="2025-07-10T00:35:00.370098975Z" level=info msg="API listen on /run/docker.sock"
Jul 10 00:35:01.050070 env[1198]: time="2025-07-10T00:35:01.050019158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 10 00:35:01.686966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1702931182.mount: Deactivated successfully.
Jul 10 00:35:03.382399 env[1198]: time="2025-07-10T00:35:03.382306675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:03.384250 env[1198]: time="2025-07-10T00:35:03.384202919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:03.386944 env[1198]: time="2025-07-10T00:35:03.386860494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:03.389204 env[1198]: time="2025-07-10T00:35:03.389153211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:03.390111 env[1198]: time="2025-07-10T00:35:03.390062051Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 10 00:35:03.391001 env[1198]: time="2025-07-10T00:35:03.390964942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 10 00:35:06.322895 env[1198]: time="2025-07-10T00:35:06.322808821Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:06.326148 env[1198]: time="2025-07-10T00:35:06.326078704Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:06.328525 env[1198]: time="2025-07-10T00:35:06.328460587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:06.331242 env[1198]: time="2025-07-10T00:35:06.331198985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:06.332252 env[1198]: time="2025-07-10T00:35:06.332216248Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 10 00:35:06.332929 env[1198]: time="2025-07-10T00:35:06.332883394Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 10 00:35:07.384961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:35:07.385266 systemd[1]: Stopped kubelet.service.
Jul 10 00:35:07.385333 systemd[1]: kubelet.service: Consumed 1.536s CPU time.
Jul 10 00:35:07.387690 systemd[1]: Starting kubelet.service...
Jul 10 00:35:07.524173 systemd[1]: Started kubelet.service.
Jul 10 00:35:07.585843 kubelet[1444]: E0710 00:35:07.585770 1444 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:35:07.590525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:35:07.590678 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:35:09.331944 env[1198]: time="2025-07-10T00:35:09.331861143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:09.336788 env[1198]: time="2025-07-10T00:35:09.336746251Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:09.340196 env[1198]: time="2025-07-10T00:35:09.340138742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:09.344635 env[1198]: time="2025-07-10T00:35:09.344601967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:09.345511 env[1198]: time="2025-07-10T00:35:09.345467428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 10 00:35:09.346197 env[1198]: time="2025-07-10T00:35:09.346166306Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 10 00:35:11.965488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302511177.mount: Deactivated successfully.
Jul 10 00:35:12.898075 env[1198]: time="2025-07-10T00:35:12.898009932Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:12.900126 env[1198]: time="2025-07-10T00:35:12.900084304Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:12.901645 env[1198]: time="2025-07-10T00:35:12.901616685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:12.903047 env[1198]: time="2025-07-10T00:35:12.903005937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:12.903436 env[1198]: time="2025-07-10T00:35:12.903408333Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 10 00:35:12.904070 env[1198]: time="2025-07-10T00:35:12.904048598Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 10 00:35:13.482738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2077557193.mount: Deactivated successfully.
Jul 10 00:35:15.197152 env[1198]: time="2025-07-10T00:35:15.197057612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:15.199748 env[1198]: time="2025-07-10T00:35:15.199692051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:15.202964 env[1198]: time="2025-07-10T00:35:15.202913395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:15.205592 env[1198]: time="2025-07-10T00:35:15.205523074Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:15.206723 env[1198]: time="2025-07-10T00:35:15.206606277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 10 00:35:15.207460 env[1198]: time="2025-07-10T00:35:15.207422569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 00:35:15.964300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922362271.mount: Deactivated successfully.
Jul 10 00:35:15.970030 env[1198]: time="2025-07-10T00:35:15.969989452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:15.972002 env[1198]: time="2025-07-10T00:35:15.971971761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:15.973713 env[1198]: time="2025-07-10T00:35:15.973676345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:15.975250 env[1198]: time="2025-07-10T00:35:15.975210308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:15.975804 env[1198]: time="2025-07-10T00:35:15.975765124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 00:35:15.976465 env[1198]: time="2025-07-10T00:35:15.976428370Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 10 00:35:16.518962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1289585468.mount: Deactivated successfully.
Jul 10 00:35:17.604771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 10 00:35:17.605020 systemd[1]: Stopped kubelet.service.
Jul 10 00:35:17.607456 systemd[1]: Starting kubelet.service...
Jul 10 00:35:17.724620 systemd[1]: Started kubelet.service.
Jul 10 00:35:17.793648 kubelet[1455]: E0710 00:35:17.793598 1455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:35:17.796152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:35:17.796324 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:35:20.466332 env[1198]: time="2025-07-10T00:35:20.466244569Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:20.468432 env[1198]: time="2025-07-10T00:35:20.468346382Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:20.470499 env[1198]: time="2025-07-10T00:35:20.470460990Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:20.472676 env[1198]: time="2025-07-10T00:35:20.472635713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:20.473751 env[1198]: time="2025-07-10T00:35:20.473702088Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 10 00:35:23.302207 systemd[1]: Stopped kubelet.service.
Jul 10 00:35:23.304341 systemd[1]: Starting kubelet.service...
Jul 10 00:35:23.327582 systemd[1]: Reloading.
Jul 10 00:35:23.400696 /usr/lib/systemd/system-generators/torcx-generator[1511]: time="2025-07-10T00:35:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 10 00:35:23.400733 /usr/lib/systemd/system-generators/torcx-generator[1511]: time="2025-07-10T00:35:23Z" level=info msg="torcx already run"
Jul 10 00:35:23.992553 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 10 00:35:23.992575 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 10 00:35:24.017026 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:35:24.108584 systemd[1]: Started kubelet.service.
Jul 10 00:35:24.111422 systemd[1]: Stopping kubelet.service...
Jul 10 00:35:24.111739 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:35:24.111911 systemd[1]: Stopped kubelet.service.
Jul 10 00:35:24.113427 systemd[1]: Starting kubelet.service...
Jul 10 00:35:24.227514 systemd[1]: Started kubelet.service.
Jul 10 00:35:24.395807 kubelet[1560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:35:24.395807 kubelet[1560]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:35:24.395807 kubelet[1560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:35:24.396296 kubelet[1560]: I0710 00:35:24.395885 1560 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:35:24.971049 kubelet[1560]: I0710 00:35:24.970991 1560 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 10 00:35:24.971049 kubelet[1560]: I0710 00:35:24.971023 1560 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:35:24.971606 kubelet[1560]: I0710 00:35:24.971567 1560 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 10 00:35:25.044074 kubelet[1560]: I0710 00:35:25.044000 1560 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:35:25.045290 kubelet[1560]: E0710 00:35:25.045253 1560 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 10 00:35:25.052772 kubelet[1560]: E0710 00:35:25.052715 1560 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 00:35:25.052772 kubelet[1560]: I0710 00:35:25.052752 1560 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 00:35:25.057291 kubelet[1560]: I0710 00:35:25.057254 1560 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:35:25.057546 kubelet[1560]: I0710 00:35:25.057510 1560 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:35:25.057724 kubelet[1560]: I0710 00:35:25.057541 1560 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:35:25.057818 kubelet[1560]: I0710 00:35:25.057731 1560 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:35:25.057818 kubelet[1560]: I0710 00:35:25.057739 1560 container_manager_linux.go:303] "Creating device plugin manager"
Jul 10 00:35:25.058795 kubelet[1560]: I0710 00:35:25.058774 1560 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:35:25.061075 kubelet[1560]: I0710 00:35:25.061048 1560 kubelet.go:480] "Attempting to sync node with API server"
Jul 10 00:35:25.061123 kubelet[1560]: I0710 00:35:25.061078 1560 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:35:25.068363 kubelet[1560]: I0710 00:35:25.068313 1560 kubelet.go:386] "Adding apiserver pod source"
Jul 10 00:35:25.071622 kubelet[1560]: I0710 00:35:25.071590 1560 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:35:25.076614 kubelet[1560]: E0710 00:35:25.076552 1560 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 10 00:35:25.076744 kubelet[1560]: E0710 00:35:25.076690 1560 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 10 00:35:25.088129 kubelet[1560]: I0710 00:35:25.087887 1560 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 10 00:35:25.088391 kubelet[1560]: I0710 00:35:25.088353 1560 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 10 00:35:25.088920 kubelet[1560]: W0710 00:35:25.088893 1560 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 00:35:25.090940 kubelet[1560]: I0710 00:35:25.090924 1560 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:35:25.091016 kubelet[1560]: I0710 00:35:25.090969 1560 server.go:1289] "Started kubelet"
Jul 10 00:35:25.091917 kubelet[1560]: I0710 00:35:25.091320 1560 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:35:25.093695 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 10 00:35:25.093765 kubelet[1560]: I0710 00:35:25.092799 1560 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:35:25.093863 kubelet[1560]: I0710 00:35:25.092703 1560 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:35:25.094332 kubelet[1560]: I0710 00:35:25.094297 1560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:35:25.094675 kubelet[1560]: I0710 00:35:25.094649 1560 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:35:25.094840 kubelet[1560]: I0710 00:35:25.094812 1560 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:35:25.095567 kubelet[1560]: E0710 00:35:25.095394 1560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:35:25.095891 kubelet[1560]: I0710 00:35:25.095874 1560 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:35:25.096004 kubelet[1560]: I0710 00:35:25.094898 1560 server.go:317] "Adding debug handlers to kubelet server"
Jul 10 00:35:25.096856 kubelet[1560]: I0710 00:35:25.096703 1560 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:35:25.097208 kubelet[1560]: E0710 00:35:25.097174 1560 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 10 00:35:25.097305 kubelet[1560]: E0710 00:35:25.097262 1560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms"
Jul 10 00:35:25.097719 kubelet[1560]: E0710 00:35:25.097694 1560 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:35:25.098276 kubelet[1560]: I0710 00:35:25.098250 1560 factory.go:223] Registration of the systemd container factory successfully
Jul 10 00:35:25.098416 kubelet[1560]: I0710 00:35:25.098336 1560 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:35:25.101409 kubelet[1560]: E0710 00:35:25.098229 1560 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bcb162436681 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:35:25.090940545 +0000 UTC m=+0.859394272,LastTimestamp:2025-07-10 00:35:25.090940545 +0000 UTC m=+0.859394272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 10 00:35:25.102482 kubelet[1560]: I0710 00:35:25.102447 1560 factory.go:223] Registration of the containerd container factory successfully
Jul 10 00:35:25.113521 kubelet[1560]: I0710 00:35:25.113385 1560 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:35:25.113521 kubelet[1560]: I0710 00:35:25.113396 1560 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:35:25.113521 kubelet[1560]: I0710 00:35:25.113412 1560 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:35:25.120963 kubelet[1560]: I0710 00:35:25.119391 1560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:35:25.120963 kubelet[1560]: I0710 00:35:25.120350 1560 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:35:25.120963 kubelet[1560]: I0710 00:35:25.120388 1560 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 10 00:35:25.120963 kubelet[1560]: I0710 00:35:25.120418 1560 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:35:25.120963 kubelet[1560]: I0710 00:35:25.120432 1560 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:35:25.120963 kubelet[1560]: E0710 00:35:25.120481 1560 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:35:25.121525 kubelet[1560]: E0710 00:35:25.121479 1560 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 00:35:25.196227 kubelet[1560]: E0710 00:35:25.196156 1560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:35:25.221687 kubelet[1560]: E0710 00:35:25.221531 1560 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:35:25.297028 kubelet[1560]: E0710 00:35:25.296967 1560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:35:25.298478 kubelet[1560]: E0710 00:35:25.298444 1560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms" Jul 10 00:35:25.397177 kubelet[1560]: E0710 00:35:25.397126 1560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:35:25.422434 kubelet[1560]: E0710 00:35:25.422397 1560 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:35:25.498201 kubelet[1560]: E0710 00:35:25.497999 1560 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:35:25.532010 kubelet[1560]: I0710 00:35:25.531927 1560 policy_none.go:49] "None policy: Start" Jul 10 00:35:25.532010 kubelet[1560]: I0710 00:35:25.531984 1560 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:35:25.532010 kubelet[1560]: I0710 00:35:25.532003 1560 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:35:25.538408 systemd[1]: Created slice kubepods.slice. Jul 10 00:35:25.542361 systemd[1]: Created slice kubepods-burstable.slice. Jul 10 00:35:25.545532 systemd[1]: Created slice kubepods-besteffort.slice. Jul 10 00:35:25.552042 kubelet[1560]: E0710 00:35:25.552007 1560 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:35:25.552191 kubelet[1560]: I0710 00:35:25.552174 1560 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:35:25.552252 kubelet[1560]: I0710 00:35:25.552190 1560 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:35:25.552484 kubelet[1560]: I0710 00:35:25.552469 1560 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:35:25.553570 kubelet[1560]: E0710 00:35:25.553545 1560 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 00:35:25.553645 kubelet[1560]: E0710 00:35:25.553590 1560 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:35:25.654482 kubelet[1560]: I0710 00:35:25.654413 1560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:35:25.654979 kubelet[1560]: E0710 00:35:25.654924 1560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 10 00:35:25.699858 kubelet[1560]: E0710 00:35:25.699800 1560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Jul 10 00:35:25.832603 systemd[1]: Created slice kubepods-burstable-pod59db48be53491f0b7aea4125322eeb1b.slice. Jul 10 00:35:25.841002 kubelet[1560]: E0710 00:35:25.840974 1560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:35:25.843650 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 10 00:35:25.845053 kubelet[1560]: E0710 00:35:25.845031 1560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:35:25.846697 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 10 00:35:25.847881 kubelet[1560]: E0710 00:35:25.847865 1560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:35:25.856620 kubelet[1560]: I0710 00:35:25.856598 1560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:35:25.856924 kubelet[1560]: E0710 00:35:25.856894 1560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 10 00:35:25.901300 kubelet[1560]: I0710 00:35:25.901275 1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:25.901415 kubelet[1560]: I0710 00:35:25.901307 1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:25.901415 kubelet[1560]: I0710 00:35:25.901327 1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59db48be53491f0b7aea4125322eeb1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"59db48be53491f0b7aea4125322eeb1b\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:25.901415 kubelet[1560]: I0710 00:35:25.901346 1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/59db48be53491f0b7aea4125322eeb1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"59db48be53491f0b7aea4125322eeb1b\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:25.901487 kubelet[1560]: I0710 00:35:25.901431 1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:25.901487 kubelet[1560]: I0710 00:35:25.901456 1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:25.901487 kubelet[1560]: I0710 00:35:25.901473 1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:25.901658 kubelet[1560]: I0710 00:35:25.901493 1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59db48be53491f0b7aea4125322eeb1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"59db48be53491f0b7aea4125322eeb1b\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:25.901658 kubelet[1560]: I0710 00:35:25.901514 1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:25.954750 kubelet[1560]: E0710 00:35:25.954725 1560 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 00:35:25.956104 kubelet[1560]: E0710 00:35:25.956081 1560 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 00:35:26.141947 kubelet[1560]: E0710 00:35:26.141901 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:26.142512 env[1198]: time="2025-07-10T00:35:26.142472381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:59db48be53491f0b7aea4125322eeb1b,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:26.145695 kubelet[1560]: E0710 00:35:26.145662 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:26.146028 env[1198]: time="2025-07-10T00:35:26.145992804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:26.148143 kubelet[1560]: E0710 
00:35:26.148109 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:26.148428 env[1198]: time="2025-07-10T00:35:26.148393069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:26.268037 kubelet[1560]: I0710 00:35:26.267994 1560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:35:26.268620 kubelet[1560]: E0710 00:35:26.268561 1560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 10 00:35:26.288288 kubelet[1560]: E0710 00:35:26.288226 1560 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 00:35:26.500840 kubelet[1560]: E0710 00:35:26.500708 1560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Jul 10 00:35:26.711820 kubelet[1560]: E0710 00:35:26.711749 1560 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 00:35:27.069892 kubelet[1560]: I0710 00:35:27.069858 1560 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:35:27.070251 kubelet[1560]: E0710 00:35:27.070219 1560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 10 00:35:27.076050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274632021.mount: Deactivated successfully. Jul 10 00:35:27.136019 kubelet[1560]: E0710 00:35:27.135972 1560 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 00:35:27.475990 env[1198]: time="2025-07-10T00:35:27.475924430Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.487037 env[1198]: time="2025-07-10T00:35:27.486966562Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.489623 env[1198]: time="2025-07-10T00:35:27.489579319Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.490856 env[1198]: time="2025-07-10T00:35:27.490798903Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.493307 env[1198]: time="2025-07-10T00:35:27.493280731Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.494849 env[1198]: time="2025-07-10T00:35:27.494809787Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.496387 env[1198]: time="2025-07-10T00:35:27.496327500Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.497883 env[1198]: time="2025-07-10T00:35:27.497821768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.499054 env[1198]: time="2025-07-10T00:35:27.499001339Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.500601 env[1198]: time="2025-07-10T00:35:27.500556889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.502906 env[1198]: time="2025-07-10T00:35:27.502868949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.505407 env[1198]: time="2025-07-10T00:35:27.505364897Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:27.524253 env[1198]: time="2025-07-10T00:35:27.524147926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:27.524253 env[1198]: time="2025-07-10T00:35:27.524212046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:27.524253 env[1198]: time="2025-07-10T00:35:27.524224923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:27.524517 env[1198]: time="2025-07-10T00:35:27.524360795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5eccb9b5243573cb48e3fb8429c8df2d77d4a1fd823c5a27953a09794f1a7b1 pid=1605 runtime=io.containerd.runc.v2 Jul 10 00:35:27.534209 env[1198]: time="2025-07-10T00:35:27.533355071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:27.534209 env[1198]: time="2025-07-10T00:35:27.533400509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:27.534209 env[1198]: time="2025-07-10T00:35:27.533410507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:27.534209 env[1198]: time="2025-07-10T00:35:27.533563157Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/411002287d46cceadbfda176d944a7dd933eef09be8f186d153dc3681792193b pid=1622 runtime=io.containerd.runc.v2 Jul 10 00:35:27.545317 env[1198]: time="2025-07-10T00:35:27.545083927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:27.545317 env[1198]: time="2025-07-10T00:35:27.545133356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:27.545317 env[1198]: time="2025-07-10T00:35:27.545146563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:27.545610 env[1198]: time="2025-07-10T00:35:27.545501613Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdb5bde28fec3fefae826232ad4e8287b381ad48e1563111acd476af8d45ec12 pid=1651 runtime=io.containerd.runc.v2 Jul 10 00:35:27.548402 systemd[1]: Started cri-containerd-c5eccb9b5243573cb48e3fb8429c8df2d77d4a1fd823c5a27953a09794f1a7b1.scope. Jul 10 00:35:27.554039 systemd[1]: Started cri-containerd-411002287d46cceadbfda176d944a7dd933eef09be8f186d153dc3681792193b.scope. Jul 10 00:35:27.562933 systemd[1]: Started cri-containerd-bdb5bde28fec3fefae826232ad4e8287b381ad48e1563111acd476af8d45ec12.scope. 
Jul 10 00:35:27.599194 env[1198]: time="2025-07-10T00:35:27.599139774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:59db48be53491f0b7aea4125322eeb1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5eccb9b5243573cb48e3fb8429c8df2d77d4a1fd823c5a27953a09794f1a7b1\"" Jul 10 00:35:27.602433 env[1198]: time="2025-07-10T00:35:27.602396664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"411002287d46cceadbfda176d944a7dd933eef09be8f186d153dc3681792193b\"" Jul 10 00:35:27.604099 kubelet[1560]: E0710 00:35:27.604075 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:27.604963 kubelet[1560]: E0710 00:35:27.604854 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:27.609309 env[1198]: time="2025-07-10T00:35:27.609260511Z" level=info msg="CreateContainer within sandbox \"c5eccb9b5243573cb48e3fb8429c8df2d77d4a1fd823c5a27953a09794f1a7b1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:35:27.611071 env[1198]: time="2025-07-10T00:35:27.611038658Z" level=info msg="CreateContainer within sandbox \"411002287d46cceadbfda176d944a7dd933eef09be8f186d153dc3681792193b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:35:27.612290 env[1198]: time="2025-07-10T00:35:27.612264109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdb5bde28fec3fefae826232ad4e8287b381ad48e1563111acd476af8d45ec12\"" Jul 10 00:35:27.613859 kubelet[1560]: E0710 00:35:27.613830 
1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:27.618819 env[1198]: time="2025-07-10T00:35:27.618776044Z" level=info msg="CreateContainer within sandbox \"bdb5bde28fec3fefae826232ad4e8287b381ad48e1563111acd476af8d45ec12\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:35:27.636811 env[1198]: time="2025-07-10T00:35:27.636757526Z" level=info msg="CreateContainer within sandbox \"411002287d46cceadbfda176d944a7dd933eef09be8f186d153dc3681792193b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a248757850690e4b0aa28f63fcc3122677c5552257a5eb96e1021259208a7ca9\"" Jul 10 00:35:27.637513 env[1198]: time="2025-07-10T00:35:27.637481926Z" level=info msg="StartContainer for \"a248757850690e4b0aa28f63fcc3122677c5552257a5eb96e1021259208a7ca9\"" Jul 10 00:35:27.641256 env[1198]: time="2025-07-10T00:35:27.641215979Z" level=info msg="CreateContainer within sandbox \"c5eccb9b5243573cb48e3fb8429c8df2d77d4a1fd823c5a27953a09794f1a7b1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"14d714ab363c354451ee54e439502792b7fbbdb1532a14b726247d8b026d8c91\"" Jul 10 00:35:27.641877 env[1198]: time="2025-07-10T00:35:27.641852481Z" level=info msg="StartContainer for \"14d714ab363c354451ee54e439502792b7fbbdb1532a14b726247d8b026d8c91\"" Jul 10 00:35:27.645347 env[1198]: time="2025-07-10T00:35:27.645288457Z" level=info msg="CreateContainer within sandbox \"bdb5bde28fec3fefae826232ad4e8287b381ad48e1563111acd476af8d45ec12\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5920348420746ac60d050e4959cd88bdc1cc746eff18cfaf9cdfd3ff79e1568d\"" Jul 10 00:35:27.646314 env[1198]: time="2025-07-10T00:35:27.646284000Z" level=info msg="StartContainer for \"5920348420746ac60d050e4959cd88bdc1cc746eff18cfaf9cdfd3ff79e1568d\"" Jul 10 
00:35:27.653119 systemd[1]: Started cri-containerd-a248757850690e4b0aa28f63fcc3122677c5552257a5eb96e1021259208a7ca9.scope. Jul 10 00:35:27.658011 systemd[1]: Started cri-containerd-14d714ab363c354451ee54e439502792b7fbbdb1532a14b726247d8b026d8c91.scope. Jul 10 00:35:27.674450 systemd[1]: Started cri-containerd-5920348420746ac60d050e4959cd88bdc1cc746eff18cfaf9cdfd3ff79e1568d.scope. Jul 10 00:35:27.703088 env[1198]: time="2025-07-10T00:35:27.703018890Z" level=info msg="StartContainer for \"a248757850690e4b0aa28f63fcc3122677c5552257a5eb96e1021259208a7ca9\" returns successfully" Jul 10 00:35:27.712808 env[1198]: time="2025-07-10T00:35:27.712757250Z" level=info msg="StartContainer for \"14d714ab363c354451ee54e439502792b7fbbdb1532a14b726247d8b026d8c91\" returns successfully" Jul 10 00:35:27.723579 env[1198]: time="2025-07-10T00:35:27.723501934Z" level=info msg="StartContainer for \"5920348420746ac60d050e4959cd88bdc1cc746eff18cfaf9cdfd3ff79e1568d\" returns successfully" Jul 10 00:35:28.129261 kubelet[1560]: E0710 00:35:28.129004 1560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:35:28.129261 kubelet[1560]: E0710 00:35:28.129136 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:28.131347 kubelet[1560]: E0710 00:35:28.131186 1560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:35:28.131347 kubelet[1560]: E0710 00:35:28.131276 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:28.149395 kubelet[1560]: E0710 00:35:28.149354 1560 kubelet.go:3305] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:35:28.149795 kubelet[1560]: E0710 00:35:28.149781 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:28.671929 kubelet[1560]: I0710 00:35:28.671872 1560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:35:28.971160 kubelet[1560]: E0710 00:35:28.971034 1560 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:35:29.073727 kubelet[1560]: I0710 00:35:29.073667 1560 apiserver.go:52] "Watching apiserver" Jul 10 00:35:29.096214 kubelet[1560]: I0710 00:35:29.096173 1560 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:35:29.134551 kubelet[1560]: E0710 00:35:29.134515 1560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:35:29.134636 kubelet[1560]: E0710 00:35:29.134563 1560 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:35:29.134696 kubelet[1560]: E0710 00:35:29.134669 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:29.134731 kubelet[1560]: E0710 00:35:29.134692 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:29.485838 kubelet[1560]: I0710 00:35:29.485788 1560 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 
00:35:29.485838 kubelet[1560]: E0710 00:35:29.485826 1560 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 00:35:29.489915 kubelet[1560]: E0710 00:35:29.489816 1560 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1850bcb162436681 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:35:25.090940545 +0000 UTC m=+0.859394272,LastTimestamp:2025-07-10 00:35:25.090940545 +0000 UTC m=+0.859394272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:35:29.496027 kubelet[1560]: I0710 00:35:29.495966 1560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:29.621880 kubelet[1560]: I0710 00:35:29.621837 1560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:29.818425 kubelet[1560]: E0710 00:35:29.818281 1560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:29.818857 kubelet[1560]: I0710 00:35:29.818841 1560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:29.819271 kubelet[1560]: E0710 00:35:29.819192 1560 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1850bcb162aa41e0 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:35:25.097681376 +0000 UTC m=+0.866135113,LastTimestamp:2025-07-10 00:35:25.097681376 +0000 UTC m=+0.866135113,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:35:29.819631 kubelet[1560]: E0710 00:35:29.819599 1560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:29.819901 kubelet[1560]: E0710 00:35:29.819887 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:29.820501 kubelet[1560]: E0710 00:35:29.820452 1560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:29.820501 kubelet[1560]: I0710 00:35:29.820481 1560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:29.822546 kubelet[1560]: E0710 00:35:29.822503 1560 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:30.135500 kubelet[1560]: I0710 00:35:30.135473 1560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 
00:35:30.431155 kubelet[1560]: I0710 00:35:30.430897 1560 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:30.791309 kubelet[1560]: E0710 00:35:30.790911 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:30.791494 kubelet[1560]: E0710 00:35:30.791318 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:31.136779 kubelet[1560]: E0710 00:35:31.136740 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:31.137160 kubelet[1560]: E0710 00:35:31.136820 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:32.308174 systemd[1]: Reloading. Jul 10 00:35:32.382490 /usr/lib/systemd/system-generators/torcx-generator[1871]: time="2025-07-10T00:35:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:35:32.382526 /usr/lib/systemd/system-generators/torcx-generator[1871]: time="2025-07-10T00:35:32Z" level=info msg="torcx already run" Jul 10 00:35:32.442979 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:35:32.442997 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Jul 10 00:35:32.460024 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:35:32.552281 systemd[1]: Stopping kubelet.service... Jul 10 00:35:32.576088 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:35:32.576315 systemd[1]: Stopped kubelet.service. Jul 10 00:35:32.576408 systemd[1]: kubelet.service: Consumed 1.267s CPU time. Jul 10 00:35:32.578619 systemd[1]: Starting kubelet.service... Jul 10 00:35:32.689019 systemd[1]: Started kubelet.service. Jul 10 00:35:32.734567 kubelet[1917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:35:32.734567 kubelet[1917]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:35:32.734567 kubelet[1917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:35:32.735068 kubelet[1917]: I0710 00:35:32.734606 1917 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:35:32.742725 kubelet[1917]: I0710 00:35:32.742677 1917 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:35:32.742725 kubelet[1917]: I0710 00:35:32.742703 1917 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:35:32.742889 kubelet[1917]: I0710 00:35:32.742877 1917 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:35:32.743994 kubelet[1917]: I0710 00:35:32.743959 1917 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 10 00:35:32.745969 kubelet[1917]: I0710 00:35:32.745933 1917 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:35:32.749697 kubelet[1917]: E0710 00:35:32.749661 1917 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:35:32.749798 kubelet[1917]: I0710 00:35:32.749782 1917 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:35:32.755615 kubelet[1917]: I0710 00:35:32.755573 1917 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:35:32.755878 kubelet[1917]: I0710 00:35:32.755834 1917 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:35:32.756045 kubelet[1917]: I0710 00:35:32.755864 1917 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:35:32.756045 kubelet[1917]: I0710 00:35:32.756041 1917 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:35:32.756177 
kubelet[1917]: I0710 00:35:32.756050 1917 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:35:32.756177 kubelet[1917]: I0710 00:35:32.756090 1917 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:35:32.756245 kubelet[1917]: I0710 00:35:32.756230 1917 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:35:32.756273 kubelet[1917]: I0710 00:35:32.756257 1917 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:35:32.756325 kubelet[1917]: I0710 00:35:32.756296 1917 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:35:32.756325 kubelet[1917]: I0710 00:35:32.756323 1917 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:35:32.758349 kubelet[1917]: I0710 00:35:32.758285 1917 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:35:32.759355 kubelet[1917]: I0710 00:35:32.759320 1917 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 00:35:32.763957 kubelet[1917]: I0710 00:35:32.763938 1917 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:35:32.764023 kubelet[1917]: I0710 00:35:32.763977 1917 server.go:1289] "Started kubelet" Jul 10 00:35:32.766932 kubelet[1917]: I0710 00:35:32.766899 1917 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:35:32.771414 kubelet[1917]: I0710 00:35:32.768999 1917 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:35:32.773257 kubelet[1917]: E0710 00:35:32.773233 1917 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:35:32.773431 kubelet[1917]: I0710 00:35:32.773387 1917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:35:32.773735 kubelet[1917]: I0710 00:35:32.773713 1917 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:35:33.100257 kubelet[1917]: I0710 00:35:33.100179 1917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:35:33.100572 kubelet[1917]: I0710 00:35:33.100545 1917 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:35:33.101887 kubelet[1917]: I0710 00:35:33.101840 1917 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:35:33.102246 kubelet[1917]: I0710 00:35:33.102211 1917 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:35:33.102646 kubelet[1917]: I0710 00:35:33.102619 1917 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:35:33.103513 kubelet[1917]: I0710 00:35:33.103492 1917 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:35:33.103746 kubelet[1917]: I0710 00:35:33.103716 1917 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:35:33.106395 kubelet[1917]: I0710 00:35:33.106342 1917 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:35:33.113906 kubelet[1917]: I0710 00:35:33.113842 1917 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 00:35:33.115289 kubelet[1917]: I0710 00:35:33.115225 1917 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:35:33.115289 kubelet[1917]: I0710 00:35:33.115262 1917 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:35:33.115724 kubelet[1917]: I0710 00:35:33.115320 1917 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:35:33.115724 kubelet[1917]: I0710 00:35:33.115334 1917 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:35:33.115724 kubelet[1917]: E0710 00:35:33.115422 1917 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:35:33.147169 kubelet[1917]: I0710 00:35:33.147119 1917 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:35:33.147169 kubelet[1917]: I0710 00:35:33.147148 1917 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:35:33.147169 kubelet[1917]: I0710 00:35:33.147172 1917 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:35:33.147465 kubelet[1917]: I0710 00:35:33.147348 1917 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:35:33.147465 kubelet[1917]: I0710 00:35:33.147364 1917 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:35:33.147465 kubelet[1917]: I0710 00:35:33.147484 1917 policy_none.go:49] "None policy: Start" Jul 10 00:35:33.147591 kubelet[1917]: I0710 00:35:33.147495 1917 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:35:33.147591 kubelet[1917]: I0710 00:35:33.147506 1917 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:35:33.147666 kubelet[1917]: I0710 00:35:33.147625 1917 state_mem.go:75] "Updated machine memory state" Jul 10 00:35:33.152653 kubelet[1917]: E0710 00:35:33.152609 1917 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:35:33.152822 kubelet[1917]: I0710 
00:35:33.152814 1917 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:35:33.152891 kubelet[1917]: I0710 00:35:33.152828 1917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:35:33.153210 kubelet[1917]: I0710 00:35:33.153180 1917 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:35:33.155673 kubelet[1917]: E0710 00:35:33.155649 1917 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:35:33.217233 kubelet[1917]: I0710 00:35:33.217152 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:33.217463 kubelet[1917]: I0710 00:35:33.217183 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:33.217463 kubelet[1917]: I0710 00:35:33.217161 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:33.225763 kubelet[1917]: E0710 00:35:33.225700 1917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:33.225954 kubelet[1917]: E0710 00:35:33.225816 1917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:33.258978 kubelet[1917]: I0710 00:35:33.258935 1917 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:35:33.303679 kubelet[1917]: I0710 00:35:33.303601 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59db48be53491f0b7aea4125322eeb1b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"59db48be53491f0b7aea4125322eeb1b\") 
" pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:33.303679 kubelet[1917]: I0710 00:35:33.303657 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59db48be53491f0b7aea4125322eeb1b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"59db48be53491f0b7aea4125322eeb1b\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:33.303679 kubelet[1917]: I0710 00:35:33.303683 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:33.303942 kubelet[1917]: I0710 00:35:33.303773 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:33.303942 kubelet[1917]: I0710 00:35:33.303832 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:33.303942 kubelet[1917]: I0710 00:35:33.303896 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59db48be53491f0b7aea4125322eeb1b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"59db48be53491f0b7aea4125322eeb1b\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:33.303942 kubelet[1917]: I0710 00:35:33.303938 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:33.304109 kubelet[1917]: I0710 00:35:33.303967 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:33.304109 kubelet[1917]: I0710 00:35:33.303990 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:33.503681 kubelet[1917]: I0710 00:35:33.503631 1917 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 00:35:33.503989 kubelet[1917]: I0710 00:35:33.503977 1917 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:35:33.514949 sudo[1956]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:35:33.515165 sudo[1956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 10 00:35:33.526885 kubelet[1917]: E0710 00:35:33.526834 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:33.527060 kubelet[1917]: E0710 00:35:33.526900 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:33.527060 kubelet[1917]: E0710 00:35:33.527043 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:33.757643 kubelet[1917]: I0710 00:35:33.757529 1917 apiserver.go:52] "Watching apiserver" Jul 10 00:35:33.804545 kubelet[1917]: I0710 00:35:33.804258 1917 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:35:34.090288 sudo[1956]: pam_unix(sudo:session): session closed for user root Jul 10 00:35:34.131959 kubelet[1917]: I0710 00:35:34.131915 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:34.132253 kubelet[1917]: I0710 00:35:34.132229 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:34.132432 kubelet[1917]: I0710 00:35:34.132413 1917 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:34.200112 kubelet[1917]: E0710 00:35:34.199924 1917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:34.200303 kubelet[1917]: E0710 00:35:34.200248 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:34.201094 kubelet[1917]: E0710 00:35:34.201075 1917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:34.201326 kubelet[1917]: E0710 00:35:34.201306 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:34.201441 kubelet[1917]: E0710 00:35:34.201431 1917 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:34.201552 kubelet[1917]: E0710 00:35:34.201529 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:34.209820 kubelet[1917]: I0710 00:35:34.209740 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.209717469 podStartE2EDuration="4.209717469s" podCreationTimestamp="2025-07-10 00:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:34.201633861 +0000 UTC m=+1.507836946" watchObservedRunningTime="2025-07-10 00:35:34.209717469 +0000 UTC m=+1.515920555" Jul 10 00:35:34.219954 kubelet[1917]: I0710 00:35:34.219874 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.21985161 podStartE2EDuration="4.21985161s" podCreationTimestamp="2025-07-10 00:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:34.210213045 +0000 UTC m=+1.516416130" watchObservedRunningTime="2025-07-10 00:35:34.21985161 +0000 UTC m=+1.526054695" Jul 10 00:35:34.229642 kubelet[1917]: I0710 00:35:34.229581 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2295601600000001 podStartE2EDuration="1.22956016s" podCreationTimestamp="2025-07-10 00:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:34.220398343 +0000 UTC m=+1.526601428" watchObservedRunningTime="2025-07-10 00:35:34.22956016 +0000 UTC m=+1.535763245" Jul 10 00:35:35.133401 kubelet[1917]: E0710 00:35:35.133341 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:35.133401 kubelet[1917]: E0710 00:35:35.133359 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:35.133991 kubelet[1917]: E0710 00:35:35.133591 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.084822 sudo[1296]: pam_unix(sudo:session): session closed for user root Jul 10 00:35:36.086653 sshd[1293]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:36.089359 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:34940.service: Deactivated successfully. Jul 10 00:35:36.090243 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:35:36.090435 systemd[1]: session-5.scope: Consumed 5.317s CPU time. Jul 10 00:35:36.090862 systemd-logind[1186]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:35:36.091627 systemd-logind[1186]: Removed session 5. 
Jul 10 00:35:36.135033 kubelet[1917]: E0710 00:35:36.134997 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.135540 kubelet[1917]: E0710 00:35:36.135081 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:37.527835 kubelet[1917]: I0710 00:35:37.527784 1917 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:35:37.528261 env[1198]: time="2025-07-10T00:35:37.528228881Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:35:37.528547 kubelet[1917]: I0710 00:35:37.528442 1917 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:35:38.339987 systemd[1]: Created slice kubepods-besteffort-podcec901cb_a4b3_452d_9a5c_6b86c746ea48.slice. Jul 10 00:35:38.414426 systemd[1]: Created slice kubepods-burstable-pod169dd06f_d173_4dfe_8294_687335d47d83.slice. 
Jul 10 00:35:38.437502 kubelet[1917]: I0710 00:35:38.437423 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-host-proc-sys-net\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.437502 kubelet[1917]: I0710 00:35:38.437491 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/169dd06f-d173-4dfe-8294-687335d47d83-hubble-tls\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.437502 kubelet[1917]: I0710 00:35:38.437519 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-hostproc\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.437873 kubelet[1917]: I0710 00:35:38.437539 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cilium-cgroup\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.437873 kubelet[1917]: I0710 00:35:38.437560 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-xtables-lock\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.437873 kubelet[1917]: I0710 00:35:38.437579 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/169dd06f-d173-4dfe-8294-687335d47d83-cilium-config-path\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.437873 kubelet[1917]: I0710 00:35:38.437597 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cilium-run\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.437873 kubelet[1917]: I0710 00:35:38.437621 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cec901cb-a4b3-452d-9a5c-6b86c746ea48-kube-proxy\") pod \"kube-proxy-kgdwx\" (UID: \"cec901cb-a4b3-452d-9a5c-6b86c746ea48\") " pod="kube-system/kube-proxy-kgdwx" Jul 10 00:35:38.437873 kubelet[1917]: I0710 00:35:38.437641 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cec901cb-a4b3-452d-9a5c-6b86c746ea48-xtables-lock\") pod \"kube-proxy-kgdwx\" (UID: \"cec901cb-a4b3-452d-9a5c-6b86c746ea48\") " pod="kube-system/kube-proxy-kgdwx" Jul 10 00:35:38.438061 kubelet[1917]: I0710 00:35:38.437663 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cec901cb-a4b3-452d-9a5c-6b86c746ea48-lib-modules\") pod \"kube-proxy-kgdwx\" (UID: \"cec901cb-a4b3-452d-9a5c-6b86c746ea48\") " pod="kube-system/kube-proxy-kgdwx" Jul 10 00:35:38.438061 kubelet[1917]: I0710 00:35:38.437681 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cni-path\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.438061 kubelet[1917]: I0710 00:35:38.437702 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-host-proc-sys-kernel\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.438061 kubelet[1917]: I0710 00:35:38.437725 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6g4\" (UniqueName: \"kubernetes.io/projected/cec901cb-a4b3-452d-9a5c-6b86c746ea48-kube-api-access-xv6g4\") pod \"kube-proxy-kgdwx\" (UID: \"cec901cb-a4b3-452d-9a5c-6b86c746ea48\") " pod="kube-system/kube-proxy-kgdwx" Jul 10 00:35:38.438061 kubelet[1917]: I0710 00:35:38.437745 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-etc-cni-netd\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.438207 kubelet[1917]: I0710 00:35:38.437768 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc945\" (UniqueName: \"kubernetes.io/projected/169dd06f-d173-4dfe-8294-687335d47d83-kube-api-access-zc945\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.438207 kubelet[1917]: I0710 00:35:38.437790 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-bpf-maps\") pod 
\"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.438207 kubelet[1917]: I0710 00:35:38.437808 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-lib-modules\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.438207 kubelet[1917]: I0710 00:35:38.437829 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/169dd06f-d173-4dfe-8294-687335d47d83-clustermesh-secrets\") pod \"cilium-sbtmz\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " pod="kube-system/cilium-sbtmz" Jul 10 00:35:38.477592 systemd[1]: Created slice kubepods-besteffort-pod62a42ec5_d39b_4b81_9a19_4de4895106ca.slice. Jul 10 00:35:38.538489 kubelet[1917]: I0710 00:35:38.538438 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v74kl\" (UniqueName: \"kubernetes.io/projected/62a42ec5-d39b-4b81-9a19-4de4895106ca-kube-api-access-v74kl\") pod \"cilium-operator-6c4d7847fc-fln79\" (UID: \"62a42ec5-d39b-4b81-9a19-4de4895106ca\") " pod="kube-system/cilium-operator-6c4d7847fc-fln79" Jul 10 00:35:38.538964 kubelet[1917]: I0710 00:35:38.538583 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62a42ec5-d39b-4b81-9a19-4de4895106ca-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fln79\" (UID: \"62a42ec5-d39b-4b81-9a19-4de4895106ca\") " pod="kube-system/cilium-operator-6c4d7847fc-fln79" Jul 10 00:35:38.538964 kubelet[1917]: I0710 00:35:38.538884 1917 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 00:35:38.652325 kubelet[1917]: E0710 00:35:38.652265 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:38.652785 env[1198]: time="2025-07-10T00:35:38.652747616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kgdwx,Uid:cec901cb-a4b3-452d-9a5c-6b86c746ea48,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:38.671117 env[1198]: time="2025-07-10T00:35:38.671026117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:38.671117 env[1198]: time="2025-07-10T00:35:38.671073340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:38.671117 env[1198]: time="2025-07-10T00:35:38.671084377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:38.672274 env[1198]: time="2025-07-10T00:35:38.671412050Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0668b03781fa3a0183e66b658e4304662b4cba04c28d2e5e3768bda39d80f8c3 pid=2012 runtime=io.containerd.runc.v2 Jul 10 00:35:38.684422 systemd[1]: Started cri-containerd-0668b03781fa3a0183e66b658e4304662b4cba04c28d2e5e3768bda39d80f8c3.scope. 
Jul 10 00:35:38.705395 env[1198]: time="2025-07-10T00:35:38.705324147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kgdwx,Uid:cec901cb-a4b3-452d-9a5c-6b86c746ea48,Namespace:kube-system,Attempt:0,} returns sandbox id \"0668b03781fa3a0183e66b658e4304662b4cba04c28d2e5e3768bda39d80f8c3\"" Jul 10 00:35:38.706256 kubelet[1917]: E0710 00:35:38.706231 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:38.718072 kubelet[1917]: E0710 00:35:38.718021 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:38.718427 env[1198]: time="2025-07-10T00:35:38.718203696Z" level=info msg="CreateContainer within sandbox \"0668b03781fa3a0183e66b658e4304662b4cba04c28d2e5e3768bda39d80f8c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:35:38.718489 env[1198]: time="2025-07-10T00:35:38.718450948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sbtmz,Uid:169dd06f-d173-4dfe-8294-687335d47d83,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:38.740679 env[1198]: time="2025-07-10T00:35:38.740615240Z" level=info msg="CreateContainer within sandbox \"0668b03781fa3a0183e66b658e4304662b4cba04c28d2e5e3768bda39d80f8c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9cdc983d7091888b341ee68a5d0a24780a620529d325f61230f4035b4070e98f\"" Jul 10 00:35:38.741515 env[1198]: time="2025-07-10T00:35:38.741458077Z" level=info msg="StartContainer for \"9cdc983d7091888b341ee68a5d0a24780a620529d325f61230f4035b4070e98f\"" Jul 10 00:35:38.746475 env[1198]: time="2025-07-10T00:35:38.746404585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:38.746582 env[1198]: time="2025-07-10T00:35:38.746477379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:38.746582 env[1198]: time="2025-07-10T00:35:38.746500174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:38.746836 env[1198]: time="2025-07-10T00:35:38.746773327Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa pid=2055 runtime=io.containerd.runc.v2 Jul 10 00:35:38.758418 systemd[1]: Started cri-containerd-9cdc983d7091888b341ee68a5d0a24780a620529d325f61230f4035b4070e98f.scope. Jul 10 00:35:38.764290 systemd[1]: Started cri-containerd-8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa.scope. 
Jul 10 00:35:38.781067 kubelet[1917]: E0710 00:35:38.780975 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:38.782730 env[1198]: time="2025-07-10T00:35:38.782683702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fln79,Uid:62a42ec5-d39b-4b81-9a19-4de4895106ca,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:38.792419 env[1198]: time="2025-07-10T00:35:38.792336967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sbtmz,Uid:169dd06f-d173-4dfe-8294-687335d47d83,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\"" Jul 10 00:35:38.793047 kubelet[1917]: E0710 00:35:38.792997 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:38.794661 env[1198]: time="2025-07-10T00:35:38.794361889Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:35:38.801621 env[1198]: time="2025-07-10T00:35:38.801562979Z" level=info msg="StartContainer for \"9cdc983d7091888b341ee68a5d0a24780a620529d325f61230f4035b4070e98f\" returns successfully" Jul 10 00:35:38.815714 env[1198]: time="2025-07-10T00:35:38.815618120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:38.815879 env[1198]: time="2025-07-10T00:35:38.815721166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:38.815879 env[1198]: time="2025-07-10T00:35:38.815752502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:38.816022 env[1198]: time="2025-07-10T00:35:38.815980326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844 pid=2126 runtime=io.containerd.runc.v2 Jul 10 00:35:38.829837 systemd[1]: Started cri-containerd-5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844.scope. Jul 10 00:35:38.870805 env[1198]: time="2025-07-10T00:35:38.870736519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fln79,Uid:62a42ec5-d39b-4b81-9a19-4de4895106ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844\"" Jul 10 00:35:38.871852 kubelet[1917]: E0710 00:35:38.871820 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:39.141586 kubelet[1917]: E0710 00:35:39.141543 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:39.152319 kubelet[1917]: I0710 00:35:39.152240 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kgdwx" podStartSLOduration=2.152202258 podStartE2EDuration="2.152202258s" podCreationTimestamp="2025-07-10 00:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:39.152032367 +0000 UTC m=+6.458235462" watchObservedRunningTime="2025-07-10 00:35:39.152202258 +0000 UTC m=+6.458405343" Jul 10 00:35:40.040816 update_engine[1189]: I0710 00:35:40.040741 1189 update_attempter.cc:509] Updating boot flags... 
Jul 10 00:35:42.428882 kubelet[1917]: E0710 00:35:42.426830 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:43.149361 kubelet[1917]: E0710 00:35:43.149301 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:44.151969 kubelet[1917]: E0710 00:35:44.151921 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:44.930019 kubelet[1917]: E0710 00:35:44.929981 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:44.937744 kubelet[1917]: E0710 00:35:44.935655 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:45.153100 kubelet[1917]: E0710 00:35:45.153067 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:46.671793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149089726.mount: Deactivated successfully. 
Jul 10 00:35:50.834110 env[1198]: time="2025-07-10T00:35:50.834058304Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:50.835933 env[1198]: time="2025-07-10T00:35:50.835899589Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:50.838082 env[1198]: time="2025-07-10T00:35:50.838038591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:50.838667 env[1198]: time="2025-07-10T00:35:50.838594728Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:35:50.840090 env[1198]: time="2025-07-10T00:35:50.840057883Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:35:50.844617 env[1198]: time="2025-07-10T00:35:50.844556926Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:35:50.859543 env[1198]: time="2025-07-10T00:35:50.859463522Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\"" Jul 10 
00:35:50.860146 env[1198]: time="2025-07-10T00:35:50.860115247Z" level=info msg="StartContainer for \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\"" Jul 10 00:35:50.881156 systemd[1]: Started cri-containerd-076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33.scope. Jul 10 00:35:51.053732 systemd[1]: cri-containerd-076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33.scope: Deactivated successfully. Jul 10 00:35:51.139462 env[1198]: time="2025-07-10T00:35:51.139123745Z" level=info msg="StartContainer for \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\" returns successfully" Jul 10 00:35:51.162931 kubelet[1917]: E0710 00:35:51.162885 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:51.703960 env[1198]: time="2025-07-10T00:35:51.703899196Z" level=info msg="shim disconnected" id=076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33 Jul 10 00:35:51.703960 env[1198]: time="2025-07-10T00:35:51.703956510Z" level=warning msg="cleaning up after shim disconnected" id=076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33 namespace=k8s.io Jul 10 00:35:51.703960 env[1198]: time="2025-07-10T00:35:51.703966021Z" level=info msg="cleaning up dead shim" Jul 10 00:35:51.711709 env[1198]: time="2025-07-10T00:35:51.711655365Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2357 runtime=io.containerd.runc.v2\n" Jul 10 00:35:51.855629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33-rootfs.mount: Deactivated successfully. 
Jul 10 00:35:52.166067 kubelet[1917]: E0710 00:35:52.166003 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:52.171504 env[1198]: time="2025-07-10T00:35:52.171442542Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:35:52.189655 env[1198]: time="2025-07-10T00:35:52.189594906Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\"" Jul 10 00:35:52.190231 env[1198]: time="2025-07-10T00:35:52.190186655Z" level=info msg="StartContainer for \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\"" Jul 10 00:35:52.207125 systemd[1]: Started cri-containerd-c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022.scope. Jul 10 00:35:52.231429 env[1198]: time="2025-07-10T00:35:52.231337358Z" level=info msg="StartContainer for \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\" returns successfully" Jul 10 00:35:52.242492 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:35:52.242773 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:35:52.242941 systemd[1]: Stopping systemd-sysctl.service... Jul 10 00:35:52.244639 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:35:52.245733 systemd[1]: cri-containerd-c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022.scope: Deactivated successfully. Jul 10 00:35:52.254202 systemd[1]: Finished systemd-sysctl.service. 
Jul 10 00:35:52.272208 env[1198]: time="2025-07-10T00:35:52.272151478Z" level=info msg="shim disconnected" id=c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022 Jul 10 00:35:52.272208 env[1198]: time="2025-07-10T00:35:52.272205783Z" level=warning msg="cleaning up after shim disconnected" id=c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022 namespace=k8s.io Jul 10 00:35:52.272466 env[1198]: time="2025-07-10T00:35:52.272219764Z" level=info msg="cleaning up dead shim" Jul 10 00:35:52.279583 env[1198]: time="2025-07-10T00:35:52.279525590Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2422 runtime=io.containerd.runc.v2\n" Jul 10 00:35:52.855837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022-rootfs.mount: Deactivated successfully. Jul 10 00:35:53.002483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026446761.mount: Deactivated successfully. 
Jul 10 00:35:53.169663 kubelet[1917]: E0710 00:35:53.169272 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:53.229830 env[1198]: time="2025-07-10T00:35:53.229771287Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:35:53.262512 env[1198]: time="2025-07-10T00:35:53.262432804Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\"" Jul 10 00:35:53.263146 env[1198]: time="2025-07-10T00:35:53.263092643Z" level=info msg="StartContainer for \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\"" Jul 10 00:35:53.281269 systemd[1]: Started cri-containerd-e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56.scope. Jul 10 00:35:53.306761 env[1198]: time="2025-07-10T00:35:53.306690926Z" level=info msg="StartContainer for \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\" returns successfully" Jul 10 00:35:53.307717 systemd[1]: cri-containerd-e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56.scope: Deactivated successfully. 
Jul 10 00:35:53.353873 env[1198]: time="2025-07-10T00:35:53.353807347Z" level=info msg="shim disconnected" id=e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56 Jul 10 00:35:53.353873 env[1198]: time="2025-07-10T00:35:53.353861433Z" level=warning msg="cleaning up after shim disconnected" id=e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56 namespace=k8s.io Jul 10 00:35:53.353873 env[1198]: time="2025-07-10T00:35:53.353870402Z" level=info msg="cleaning up dead shim" Jul 10 00:35:53.363048 env[1198]: time="2025-07-10T00:35:53.362988825Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2484 runtime=io.containerd.runc.v2\n" Jul 10 00:35:54.172404 kubelet[1917]: E0710 00:35:54.172342 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:54.429252 env[1198]: time="2025-07-10T00:35:54.429089172Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:35:54.463309 env[1198]: time="2025-07-10T00:35:54.463219332Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\"" Jul 10 00:35:54.463958 env[1198]: time="2025-07-10T00:35:54.463905652Z" level=info msg="StartContainer for \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\"" Jul 10 00:35:54.473124 env[1198]: time="2025-07-10T00:35:54.473084885Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:54.475468 env[1198]: time="2025-07-10T00:35:54.475414139Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:54.477004 env[1198]: time="2025-07-10T00:35:54.476983788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:54.477664 env[1198]: time="2025-07-10T00:35:54.477639793Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:35:54.483049 systemd[1]: Started cri-containerd-b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275.scope. 
Jul 10 00:35:54.485546 env[1198]: time="2025-07-10T00:35:54.485514943Z" level=info msg="CreateContainer within sandbox \"5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:35:54.501701 env[1198]: time="2025-07-10T00:35:54.501031113Z" level=info msg="CreateContainer within sandbox \"5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\"" Jul 10 00:35:54.501967 env[1198]: time="2025-07-10T00:35:54.501914392Z" level=info msg="StartContainer for \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\"" Jul 10 00:35:54.510080 systemd[1]: cri-containerd-b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275.scope: Deactivated successfully. Jul 10 00:35:54.513589 env[1198]: time="2025-07-10T00:35:54.513526990Z" level=info msg="StartContainer for \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\" returns successfully" Jul 10 00:35:54.522096 systemd[1]: Started cri-containerd-ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21.scope. 
Jul 10 00:35:54.785031 env[1198]: time="2025-07-10T00:35:54.784874425Z" level=info msg="StartContainer for \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\" returns successfully" Jul 10 00:35:54.787693 env[1198]: time="2025-07-10T00:35:54.787618642Z" level=info msg="shim disconnected" id=b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275 Jul 10 00:35:54.787693 env[1198]: time="2025-07-10T00:35:54.787673068Z" level=warning msg="cleaning up after shim disconnected" id=b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275 namespace=k8s.io Jul 10 00:35:54.787693 env[1198]: time="2025-07-10T00:35:54.787689092Z" level=info msg="cleaning up dead shim" Jul 10 00:35:54.796871 env[1198]: time="2025-07-10T00:35:54.796805320Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n" Jul 10 00:35:54.856689 systemd[1]: run-containerd-runc-k8s.io-b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275-runc.sjvkHm.mount: Deactivated successfully. Jul 10 00:35:54.856774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275-rootfs.mount: Deactivated successfully. 
Jul 10 00:35:55.174692 kubelet[1917]: E0710 00:35:55.174657 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:55.176797 kubelet[1917]: E0710 00:35:55.176778 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:55.367042 env[1198]: time="2025-07-10T00:35:55.366979101Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:35:56.064217 env[1198]: time="2025-07-10T00:35:56.064126423Z" level=info msg="CreateContainer within sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\"" Jul 10 00:35:56.065169 env[1198]: time="2025-07-10T00:35:56.065085606Z" level=info msg="StartContainer for \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\"" Jul 10 00:35:56.099679 systemd[1]: Started cri-containerd-da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6.scope. 
Jul 10 00:35:56.174765 env[1198]: time="2025-07-10T00:35:56.174698878Z" level=info msg="StartContainer for \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\" returns successfully" Jul 10 00:35:56.180355 kubelet[1917]: E0710 00:35:56.180254 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:56.354329 kubelet[1917]: I0710 00:35:56.353319 1917 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:35:56.380026 kubelet[1917]: I0710 00:35:56.379944 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fln79" podStartSLOduration=2.77346884 podStartE2EDuration="18.379921107s" podCreationTimestamp="2025-07-10 00:35:38 +0000 UTC" firstStartedPulling="2025-07-10 00:35:38.87269391 +0000 UTC m=+6.178896985" lastFinishedPulling="2025-07-10 00:35:54.479146167 +0000 UTC m=+21.785349252" observedRunningTime="2025-07-10 00:35:56.055064029 +0000 UTC m=+23.361267134" watchObservedRunningTime="2025-07-10 00:35:56.379921107 +0000 UTC m=+23.686124202" Jul 10 00:35:56.673912 systemd[1]: Created slice kubepods-burstable-pod2bd7de4b_72f4_4534_845a_cd892c3cc98c.slice. Jul 10 00:35:56.679545 systemd[1]: Created slice kubepods-burstable-pod9c73933e_92b3_4af4_859d_e037a10ce8e1.slice. 
Jul 10 00:35:56.773058 kubelet[1917]: I0710 00:35:56.772992 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bj88\" (UniqueName: \"kubernetes.io/projected/2bd7de4b-72f4-4534-845a-cd892c3cc98c-kube-api-access-5bj88\") pod \"coredns-674b8bbfcf-c8lgr\" (UID: \"2bd7de4b-72f4-4534-845a-cd892c3cc98c\") " pod="kube-system/coredns-674b8bbfcf-c8lgr" Jul 10 00:35:56.773058 kubelet[1917]: I0710 00:35:56.773032 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c73933e-92b3-4af4-859d-e037a10ce8e1-config-volume\") pod \"coredns-674b8bbfcf-cc9n2\" (UID: \"9c73933e-92b3-4af4-859d-e037a10ce8e1\") " pod="kube-system/coredns-674b8bbfcf-cc9n2" Jul 10 00:35:56.773058 kubelet[1917]: I0710 00:35:56.773051 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnbgq\" (UniqueName: \"kubernetes.io/projected/9c73933e-92b3-4af4-859d-e037a10ce8e1-kube-api-access-rnbgq\") pod \"coredns-674b8bbfcf-cc9n2\" (UID: \"9c73933e-92b3-4af4-859d-e037a10ce8e1\") " pod="kube-system/coredns-674b8bbfcf-cc9n2" Jul 10 00:35:56.773058 kubelet[1917]: I0710 00:35:56.773073 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd7de4b-72f4-4534-845a-cd892c3cc98c-config-volume\") pod \"coredns-674b8bbfcf-c8lgr\" (UID: \"2bd7de4b-72f4-4534-845a-cd892c3cc98c\") " pod="kube-system/coredns-674b8bbfcf-c8lgr" Jul 10 00:35:56.915864 systemd[1]: run-containerd-runc-k8s.io-da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6-runc.sCoNHJ.mount: Deactivated successfully. 
Jul 10 00:35:56.982191 kubelet[1917]: E0710 00:35:56.981809 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:56.982191 kubelet[1917]: E0710 00:35:56.982013 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:56.987076 env[1198]: time="2025-07-10T00:35:56.987013425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c8lgr,Uid:2bd7de4b-72f4-4534-845a-cd892c3cc98c,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:56.987843 env[1198]: time="2025-07-10T00:35:56.987793420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cc9n2,Uid:9c73933e-92b3-4af4-859d-e037a10ce8e1,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:57.182126 kubelet[1917]: E0710 00:35:57.182086 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:57.561741 kubelet[1917]: I0710 00:35:57.561669 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sbtmz" podStartSLOduration=7.515845474 podStartE2EDuration="19.561646489s" podCreationTimestamp="2025-07-10 00:35:38 +0000 UTC" firstStartedPulling="2025-07-10 00:35:38.793939991 +0000 UTC m=+6.100143076" lastFinishedPulling="2025-07-10 00:35:50.839740996 +0000 UTC m=+18.145944091" observedRunningTime="2025-07-10 00:35:57.561644936 +0000 UTC m=+24.867848051" watchObservedRunningTime="2025-07-10 00:35:57.561646489 +0000 UTC m=+24.867849574" Jul 10 00:35:58.345181 systemd-networkd[1017]: cilium_host: Link UP Jul 10 00:35:58.345305 systemd-networkd[1017]: cilium_net: Link UP Jul 10 00:35:58.347355 systemd-networkd[1017]: cilium_net: Gained carrier Jul 10 
00:35:58.348878 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 10 00:35:58.348948 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 10 00:35:58.349062 systemd-networkd[1017]: cilium_host: Gained carrier Jul 10 00:35:58.434711 systemd-networkd[1017]: cilium_vxlan: Link UP Jul 10 00:35:58.434724 systemd-networkd[1017]: cilium_vxlan: Gained carrier Jul 10 00:35:58.670408 kernel: NET: Registered PF_ALG protocol family Jul 10 00:35:58.719819 kubelet[1917]: E0710 00:35:58.719775 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:58.808611 systemd-networkd[1017]: cilium_net: Gained IPv6LL Jul 10 00:35:59.310451 systemd-networkd[1017]: lxc_health: Link UP Jul 10 00:35:59.319434 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 10 00:35:59.319451 systemd-networkd[1017]: lxc_health: Gained carrier Jul 10 00:35:59.344563 systemd-networkd[1017]: cilium_host: Gained IPv6LL Jul 10 00:35:59.893869 systemd-networkd[1017]: lxcf8fadab56a00: Link UP Jul 10 00:35:59.902449 kernel: eth0: renamed from tmp21eb1 Jul 10 00:35:59.918700 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:35:59.918831 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf8fadab56a00: link becomes ready Jul 10 00:35:59.918878 systemd-networkd[1017]: lxcf8fadab56a00: Gained carrier Jul 10 00:35:59.958000 systemd-networkd[1017]: lxc0a7c54986a2d: Link UP Jul 10 00:35:59.970412 kernel: eth0: renamed from tmp071ab Jul 10 00:35:59.976404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0a7c54986a2d: link becomes ready Jul 10 00:35:59.976691 systemd-networkd[1017]: lxc0a7c54986a2d: Gained carrier Jul 10 00:36:00.179994 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:52262.service. 
Jul 10 00:36:00.217638 sshd[3128]: Accepted publickey for core from 10.0.0.1 port 52262 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:00.219026 sshd[3128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:00.223470 systemd-logind[1186]: New session 6 of user core. Jul 10 00:36:00.224330 systemd[1]: Started session-6.scope. Jul 10 00:36:00.305601 systemd-networkd[1017]: cilium_vxlan: Gained IPv6LL Jul 10 00:36:00.514547 sshd[3128]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:00.517489 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:52262.service: Deactivated successfully. Jul 10 00:36:00.518354 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:36:00.519226 systemd-logind[1186]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:36:00.519951 systemd-logind[1186]: Removed session 6. Jul 10 00:36:00.722070 kubelet[1917]: E0710 00:36:00.722016 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:00.944547 systemd-networkd[1017]: lxc_health: Gained IPv6LL Jul 10 00:36:01.190205 kubelet[1917]: E0710 00:36:01.190142 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:01.328619 systemd-networkd[1017]: lxcf8fadab56a00: Gained IPv6LL Jul 10 00:36:01.904546 systemd-networkd[1017]: lxc0a7c54986a2d: Gained IPv6LL Jul 10 00:36:02.191840 kubelet[1917]: E0710 00:36:02.191710 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:03.403735 env[1198]: time="2025-07-10T00:36:03.402936655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:03.403735 env[1198]: time="2025-07-10T00:36:03.402997150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:03.403735 env[1198]: time="2025-07-10T00:36:03.403011069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:03.403735 env[1198]: time="2025-07-10T00:36:03.403198245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21eb19974c58a63b68b4df029da7ab2807eda5cd37925e0825bdc29ac8c4eeef pid=3164 runtime=io.containerd.runc.v2 Jul 10 00:36:03.407479 env[1198]: time="2025-07-10T00:36:03.407410603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:36:03.407479 env[1198]: time="2025-07-10T00:36:03.407478773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:36:03.407623 env[1198]: time="2025-07-10T00:36:03.407508174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:36:03.407664 env[1198]: time="2025-07-10T00:36:03.407631979Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/071ab0d2180340d3ecf6dcf9fce2ac6ce7ae8155a0a63de8ffc7a966ab02be4b pid=3182 runtime=io.containerd.runc.v2 Jul 10 00:36:03.423675 systemd[1]: run-containerd-runc-k8s.io-071ab0d2180340d3ecf6dcf9fce2ac6ce7ae8155a0a63de8ffc7a966ab02be4b-runc.uUSMPP.mount: Deactivated successfully. Jul 10 00:36:03.426935 systemd[1]: Started cri-containerd-071ab0d2180340d3ecf6dcf9fce2ac6ce7ae8155a0a63de8ffc7a966ab02be4b.scope. 
Jul 10 00:36:03.428016 systemd[1]: Started cri-containerd-21eb19974c58a63b68b4df029da7ab2807eda5cd37925e0825bdc29ac8c4eeef.scope. Jul 10 00:36:03.439452 systemd-resolved[1136]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:03.440604 systemd-resolved[1136]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:36:03.464550 env[1198]: time="2025-07-10T00:36:03.464507891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c8lgr,Uid:2bd7de4b-72f4-4534-845a-cd892c3cc98c,Namespace:kube-system,Attempt:0,} returns sandbox id \"21eb19974c58a63b68b4df029da7ab2807eda5cd37925e0825bdc29ac8c4eeef\"" Jul 10 00:36:03.466493 kubelet[1917]: E0710 00:36:03.465466 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:03.467678 env[1198]: time="2025-07-10T00:36:03.467644590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cc9n2,Uid:9c73933e-92b3-4af4-859d-e037a10ce8e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"071ab0d2180340d3ecf6dcf9fce2ac6ce7ae8155a0a63de8ffc7a966ab02be4b\"" Jul 10 00:36:03.468904 kubelet[1917]: E0710 00:36:03.468769 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:03.473129 env[1198]: time="2025-07-10T00:36:03.473090522Z" level=info msg="CreateContainer within sandbox \"21eb19974c58a63b68b4df029da7ab2807eda5cd37925e0825bdc29ac8c4eeef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:36:03.474918 env[1198]: time="2025-07-10T00:36:03.474878059Z" level=info msg="CreateContainer within sandbox \"071ab0d2180340d3ecf6dcf9fce2ac6ce7ae8155a0a63de8ffc7a966ab02be4b\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:36:03.492875 env[1198]: time="2025-07-10T00:36:03.492799501Z" level=info msg="CreateContainer within sandbox \"071ab0d2180340d3ecf6dcf9fce2ac6ce7ae8155a0a63de8ffc7a966ab02be4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a72a65c629b2ca39c9c10f61be62ab5d0fdff36fa1a77dd12adf9d1dc204999\"" Jul 10 00:36:03.494787 env[1198]: time="2025-07-10T00:36:03.493805886Z" level=info msg="StartContainer for \"0a72a65c629b2ca39c9c10f61be62ab5d0fdff36fa1a77dd12adf9d1dc204999\"" Jul 10 00:36:03.508712 env[1198]: time="2025-07-10T00:36:03.506532514Z" level=info msg="CreateContainer within sandbox \"21eb19974c58a63b68b4df029da7ab2807eda5cd37925e0825bdc29ac8c4eeef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c22b680fa29657f5e61444f229ad603d8aab50758a373e5d012c4dec174e87df\"" Jul 10 00:36:03.508712 env[1198]: time="2025-07-10T00:36:03.507686925Z" level=info msg="StartContainer for \"c22b680fa29657f5e61444f229ad603d8aab50758a373e5d012c4dec174e87df\"" Jul 10 00:36:03.514461 systemd[1]: Started cri-containerd-0a72a65c629b2ca39c9c10f61be62ab5d0fdff36fa1a77dd12adf9d1dc204999.scope. Jul 10 00:36:03.536268 systemd[1]: Started cri-containerd-c22b680fa29657f5e61444f229ad603d8aab50758a373e5d012c4dec174e87df.scope. 
Jul 10 00:36:03.553196 env[1198]: time="2025-07-10T00:36:03.553125599Z" level=info msg="StartContainer for \"0a72a65c629b2ca39c9c10f61be62ab5d0fdff36fa1a77dd12adf9d1dc204999\" returns successfully" Jul 10 00:36:03.567152 env[1198]: time="2025-07-10T00:36:03.567091371Z" level=info msg="StartContainer for \"c22b680fa29657f5e61444f229ad603d8aab50758a373e5d012c4dec174e87df\" returns successfully" Jul 10 00:36:04.196281 kubelet[1917]: E0710 00:36:04.196229 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:04.197844 kubelet[1917]: E0710 00:36:04.197798 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:04.207394 kubelet[1917]: I0710 00:36:04.207291 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cc9n2" podStartSLOduration=26.207270996 podStartE2EDuration="26.207270996s" podCreationTimestamp="2025-07-10 00:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:36:04.206986009 +0000 UTC m=+31.513189094" watchObservedRunningTime="2025-07-10 00:36:04.207270996 +0000 UTC m=+31.513474081" Jul 10 00:36:05.199921 kubelet[1917]: E0710 00:36:05.199882 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:05.200360 kubelet[1917]: E0710 00:36:05.200091 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:05.520459 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:52270.service. 
Jul 10 00:36:05.556873 sshd[3324]: Accepted publickey for core from 10.0.0.1 port 52270 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:05.558133 sshd[3324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:05.561771 systemd-logind[1186]: New session 7 of user core. Jul 10 00:36:05.562828 systemd[1]: Started session-7.scope. Jul 10 00:36:05.696955 sshd[3324]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:05.699634 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:52270.service: Deactivated successfully. Jul 10 00:36:05.700555 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:36:05.701086 systemd-logind[1186]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:36:05.701889 systemd-logind[1186]: Removed session 7. Jul 10 00:36:06.201860 kubelet[1917]: E0710 00:36:06.201822 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:06.202281 kubelet[1917]: E0710 00:36:06.201924 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:10.704473 systemd[1]: Started sshd@7-10.0.0.19:22-10.0.0.1:36190.service. Jul 10 00:36:10.741718 sshd[3341]: Accepted publickey for core from 10.0.0.1 port 36190 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:10.743346 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:10.747835 systemd-logind[1186]: New session 8 of user core. Jul 10 00:36:10.748710 systemd[1]: Started session-8.scope. Jul 10 00:36:10.875518 sshd[3341]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:10.877837 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:36190.service: Deactivated successfully. 
Jul 10 00:36:10.878743 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:36:10.879332 systemd-logind[1186]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:36:10.880292 systemd-logind[1186]: Removed session 8. Jul 10 00:36:15.880842 systemd[1]: Started sshd@8-10.0.0.19:22-10.0.0.1:36194.service. Jul 10 00:36:15.920709 sshd[3356]: Accepted publickey for core from 10.0.0.1 port 36194 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:15.921810 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:15.925298 systemd-logind[1186]: New session 9 of user core. Jul 10 00:36:15.926130 systemd[1]: Started session-9.scope. Jul 10 00:36:16.032742 sshd[3356]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:16.035059 systemd[1]: sshd@8-10.0.0.19:22-10.0.0.1:36194.service: Deactivated successfully. Jul 10 00:36:16.035794 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:36:16.036512 systemd-logind[1186]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:36:16.037161 systemd-logind[1186]: Removed session 9. Jul 10 00:36:21.038529 systemd[1]: Started sshd@9-10.0.0.19:22-10.0.0.1:33498.service. Jul 10 00:36:21.071394 sshd[3370]: Accepted publickey for core from 10.0.0.1 port 33498 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:21.072692 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:21.076237 systemd-logind[1186]: New session 10 of user core. Jul 10 00:36:21.077066 systemd[1]: Started session-10.scope. Jul 10 00:36:21.201220 sshd[3370]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:21.204982 systemd[1]: sshd@9-10.0.0.19:22-10.0.0.1:33498.service: Deactivated successfully. Jul 10 00:36:21.205563 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:36:21.206101 systemd-logind[1186]: Session 10 logged out. Waiting for processes to exit. 
Jul 10 00:36:21.207393 systemd[1]: Started sshd@10-10.0.0.19:22-10.0.0.1:33500.service. Jul 10 00:36:21.208363 systemd-logind[1186]: Removed session 10. Jul 10 00:36:21.264151 sshd[3384]: Accepted publickey for core from 10.0.0.1 port 33500 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:21.265591 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:21.269952 systemd-logind[1186]: New session 11 of user core. Jul 10 00:36:21.270816 systemd[1]: Started session-11.scope. Jul 10 00:36:21.549922 sshd[3384]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:21.552897 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:33500.service: Deactivated successfully. Jul 10 00:36:21.553587 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:36:21.554349 systemd-logind[1186]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:36:21.555546 systemd[1]: Started sshd@11-10.0.0.19:22-10.0.0.1:33508.service. Jul 10 00:36:21.556348 systemd-logind[1186]: Removed session 11. Jul 10 00:36:21.588649 sshd[3396]: Accepted publickey for core from 10.0.0.1 port 33508 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:21.589844 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:21.593150 systemd-logind[1186]: New session 12 of user core. Jul 10 00:36:21.593975 systemd[1]: Started session-12.scope. Jul 10 00:36:21.712916 sshd[3396]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:21.716469 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:33508.service: Deactivated successfully. Jul 10 00:36:21.717335 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:36:21.718123 systemd-logind[1186]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:36:21.719144 systemd-logind[1186]: Removed session 12. Jul 10 00:36:26.717468 systemd[1]: Started sshd@12-10.0.0.19:22-10.0.0.1:37922.service. 
Jul 10 00:36:26.750927 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 37922 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:26.752158 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:26.755385 systemd-logind[1186]: New session 13 of user core. Jul 10 00:36:26.756456 systemd[1]: Started session-13.scope. Jul 10 00:36:26.869658 sshd[3410]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:26.872127 systemd[1]: sshd@12-10.0.0.19:22-10.0.0.1:37922.service: Deactivated successfully. Jul 10 00:36:26.873023 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:36:26.873876 systemd-logind[1186]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:36:26.874772 systemd-logind[1186]: Removed session 13. Jul 10 00:36:31.875108 systemd[1]: Started sshd@13-10.0.0.19:22-10.0.0.1:37932.service. Jul 10 00:36:31.907904 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 37932 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:31.909339 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:31.913602 systemd-logind[1186]: New session 14 of user core. Jul 10 00:36:31.914429 systemd[1]: Started session-14.scope. Jul 10 00:36:32.035297 sshd[3423]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:32.038796 systemd[1]: sshd@13-10.0.0.19:22-10.0.0.1:37932.service: Deactivated successfully. Jul 10 00:36:32.039641 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:36:32.040286 systemd-logind[1186]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:36:32.041144 systemd-logind[1186]: Removed session 14. Jul 10 00:36:37.039627 systemd[1]: Started sshd@14-10.0.0.19:22-10.0.0.1:36310.service. 
Jul 10 00:36:37.140144 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 36310 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:37.141317 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:37.144639 systemd-logind[1186]: New session 15 of user core. Jul 10 00:36:37.145457 systemd[1]: Started session-15.scope. Jul 10 00:36:37.338659 sshd[3439]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:37.341684 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:36310.service: Deactivated successfully. Jul 10 00:36:37.342330 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:36:37.342929 systemd-logind[1186]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:36:37.344182 systemd[1]: Started sshd@15-10.0.0.19:22-10.0.0.1:36312.service. Jul 10 00:36:37.345463 systemd-logind[1186]: Removed session 15. Jul 10 00:36:37.379188 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 36312 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:37.380519 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:37.384044 systemd-logind[1186]: New session 16 of user core. Jul 10 00:36:37.384829 systemd[1]: Started session-16.scope. Jul 10 00:36:38.191846 sshd[3455]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:38.196331 systemd[1]: Started sshd@16-10.0.0.19:22-10.0.0.1:36328.service. Jul 10 00:36:38.198651 systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:36312.service: Deactivated successfully. Jul 10 00:36:38.199522 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:36:38.200136 systemd-logind[1186]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:36:38.200863 systemd-logind[1186]: Removed session 16. 
Jul 10 00:36:38.230828 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 36328 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:38.231999 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:38.235248 systemd-logind[1186]: New session 17 of user core. Jul 10 00:36:38.236245 systemd[1]: Started session-17.scope. Jul 10 00:36:39.006321 sshd[3466]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:39.011264 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:36328.service: Deactivated successfully. Jul 10 00:36:39.011821 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:36:39.013085 systemd-logind[1186]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:36:39.015142 systemd[1]: Started sshd@17-10.0.0.19:22-10.0.0.1:36332.service. Jul 10 00:36:39.016831 systemd-logind[1186]: Removed session 17. Jul 10 00:36:39.054823 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 36332 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:39.055977 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:39.059679 systemd-logind[1186]: New session 18 of user core. Jul 10 00:36:39.060715 systemd[1]: Started session-18.scope. Jul 10 00:36:39.315203 sshd[3487]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:39.319727 systemd[1]: Started sshd@18-10.0.0.19:22-10.0.0.1:36346.service. Jul 10 00:36:39.320612 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:36332.service: Deactivated successfully. Jul 10 00:36:39.321400 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:36:39.322113 systemd-logind[1186]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:36:39.324362 systemd-logind[1186]: Removed session 18. 
Jul 10 00:36:39.356479 sshd[3499]: Accepted publickey for core from 10.0.0.1 port 36346 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:39.357768 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:39.361667 systemd-logind[1186]: New session 19 of user core. Jul 10 00:36:39.362759 systemd[1]: Started session-19.scope. Jul 10 00:36:39.476489 sshd[3499]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:39.478920 systemd[1]: sshd@18-10.0.0.19:22-10.0.0.1:36346.service: Deactivated successfully. Jul 10 00:36:39.479729 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:36:39.480447 systemd-logind[1186]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:36:39.481309 systemd-logind[1186]: Removed session 19. Jul 10 00:36:44.481260 systemd[1]: Started sshd@19-10.0.0.19:22-10.0.0.1:36358.service. Jul 10 00:36:44.513451 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 36358 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:44.514693 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:44.518047 systemd-logind[1186]: New session 20 of user core. Jul 10 00:36:44.519085 systemd[1]: Started session-20.scope. Jul 10 00:36:44.619185 sshd[3514]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:44.621799 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:36358.service: Deactivated successfully. Jul 10 00:36:44.622634 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:36:44.623346 systemd-logind[1186]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:36:44.624198 systemd-logind[1186]: Removed session 20. 
Jul 10 00:36:49.116454 kubelet[1917]: E0710 00:36:49.116364 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:49.624099 systemd[1]: Started sshd@20-10.0.0.19:22-10.0.0.1:41646.service. Jul 10 00:36:49.660337 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 41646 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:49.661700 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:49.665183 systemd-logind[1186]: New session 21 of user core. Jul 10 00:36:49.666054 systemd[1]: Started session-21.scope. Jul 10 00:36:49.768341 sshd[3529]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:49.770580 systemd[1]: sshd@20-10.0.0.19:22-10.0.0.1:41646.service: Deactivated successfully. Jul 10 00:36:49.771269 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:36:49.771754 systemd-logind[1186]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:36:49.772453 systemd-logind[1186]: Removed session 21. Jul 10 00:36:53.117456 kubelet[1917]: E0710 00:36:53.117398 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:54.773674 systemd[1]: Started sshd@21-10.0.0.19:22-10.0.0.1:41658.service. Jul 10 00:36:54.805846 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 41658 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:54.806772 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:54.810188 systemd-logind[1186]: New session 22 of user core. Jul 10 00:36:54.811306 systemd[1]: Started session-22.scope. 
Jul 10 00:36:54.912296 sshd[3542]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:54.915990 systemd[1]: sshd@21-10.0.0.19:22-10.0.0.1:41658.service: Deactivated successfully. Jul 10 00:36:54.916600 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:36:54.917109 systemd-logind[1186]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:36:54.918138 systemd[1]: Started sshd@22-10.0.0.19:22-10.0.0.1:41672.service. Jul 10 00:36:54.918844 systemd-logind[1186]: Removed session 22. Jul 10 00:36:54.949749 sshd[3555]: Accepted publickey for core from 10.0.0.1 port 41672 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:54.950731 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:54.954429 systemd-logind[1186]: New session 23 of user core. Jul 10 00:36:54.955219 systemd[1]: Started session-23.scope. Jul 10 00:36:56.277214 kubelet[1917]: I0710 00:36:56.277131 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c8lgr" podStartSLOduration=78.277107744 podStartE2EDuration="1m18.277107744s" podCreationTimestamp="2025-07-10 00:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:36:04.232417099 +0000 UTC m=+31.538620184" watchObservedRunningTime="2025-07-10 00:36:56.277107744 +0000 UTC m=+83.583310829" Jul 10 00:36:56.289179 env[1198]: time="2025-07-10T00:36:56.289124801Z" level=info msg="StopContainer for \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\" with timeout 30 (s)" Jul 10 00:36:56.289570 env[1198]: time="2025-07-10T00:36:56.289506533Z" level=info msg="Stop container \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\" with signal terminated" Jul 10 00:36:56.296406 systemd[1]: 
run-containerd-runc-k8s.io-da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6-runc.VkxqCo.mount: Deactivated successfully. Jul 10 00:36:56.301107 systemd[1]: cri-containerd-ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21.scope: Deactivated successfully. Jul 10 00:36:56.313970 env[1198]: time="2025-07-10T00:36:56.313908423Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:36:56.319964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21-rootfs.mount: Deactivated successfully. Jul 10 00:36:56.321795 env[1198]: time="2025-07-10T00:36:56.321765953Z" level=info msg="StopContainer for \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\" with timeout 2 (s)" Jul 10 00:36:56.322002 env[1198]: time="2025-07-10T00:36:56.321980438Z" level=info msg="Stop container \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\" with signal terminated" Jul 10 00:36:56.328219 systemd-networkd[1017]: lxc_health: Link DOWN Jul 10 00:36:56.328226 systemd-networkd[1017]: lxc_health: Lost carrier Jul 10 00:36:56.329400 env[1198]: time="2025-07-10T00:36:56.329350276Z" level=info msg="shim disconnected" id=ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21 Jul 10 00:36:56.329470 env[1198]: time="2025-07-10T00:36:56.329406112Z" level=warning msg="cleaning up after shim disconnected" id=ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21 namespace=k8s.io Jul 10 00:36:56.329470 env[1198]: time="2025-07-10T00:36:56.329414889Z" level=info msg="cleaning up dead shim" Jul 10 00:36:56.335328 env[1198]: time="2025-07-10T00:36:56.335297274Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:56Z\" level=info 
msg=\"starting signal loop\" namespace=k8s.io pid=3609 runtime=io.containerd.runc.v2\n" Jul 10 00:36:56.337815 env[1198]: time="2025-07-10T00:36:56.337787404Z" level=info msg="StopContainer for \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\" returns successfully" Jul 10 00:36:56.338520 env[1198]: time="2025-07-10T00:36:56.338477599Z" level=info msg="StopPodSandbox for \"5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844\"" Jul 10 00:36:56.338688 env[1198]: time="2025-07-10T00:36:56.338555446Z" level=info msg="Container to stop \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:36:56.340283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844-shm.mount: Deactivated successfully. Jul 10 00:36:56.345036 systemd[1]: cri-containerd-5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844.scope: Deactivated successfully. Jul 10 00:36:56.364752 systemd[1]: cri-containerd-da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6.scope: Deactivated successfully. Jul 10 00:36:56.365022 systemd[1]: cri-containerd-da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6.scope: Consumed 6.438s CPU time. 
Jul 10 00:36:56.375462 env[1198]: time="2025-07-10T00:36:56.375414715Z" level=info msg="shim disconnected" id=5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844 Jul 10 00:36:56.376195 env[1198]: time="2025-07-10T00:36:56.376174072Z" level=warning msg="cleaning up after shim disconnected" id=5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844 namespace=k8s.io Jul 10 00:36:56.376280 env[1198]: time="2025-07-10T00:36:56.376260144Z" level=info msg="cleaning up dead shim" Jul 10 00:36:56.383038 env[1198]: time="2025-07-10T00:36:56.382984321Z" level=info msg="shim disconnected" id=da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6 Jul 10 00:36:56.383038 env[1198]: time="2025-07-10T00:36:56.383037782Z" level=warning msg="cleaning up after shim disconnected" id=da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6 namespace=k8s.io Jul 10 00:36:56.383183 env[1198]: time="2025-07-10T00:36:56.383046258Z" level=info msg="cleaning up dead shim" Jul 10 00:36:56.383408 env[1198]: time="2025-07-10T00:36:56.383353530Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3655 runtime=io.containerd.runc.v2\n" Jul 10 00:36:56.383696 env[1198]: time="2025-07-10T00:36:56.383659237Z" level=info msg="TearDown network for sandbox \"5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844\" successfully" Jul 10 00:36:56.383696 env[1198]: time="2025-07-10T00:36:56.383683564Z" level=info msg="StopPodSandbox for \"5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844\" returns successfully" Jul 10 00:36:56.393726 env[1198]: time="2025-07-10T00:36:56.391060395Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3667 runtime=io.containerd.runc.v2\n" Jul 10 00:36:56.393726 env[1198]: time="2025-07-10T00:36:56.393515578Z" level=info msg="StopContainer for 
\"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\" returns successfully" Jul 10 00:36:56.393852 env[1198]: time="2025-07-10T00:36:56.393792732Z" level=info msg="StopPodSandbox for \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\"" Jul 10 00:36:56.393852 env[1198]: time="2025-07-10T00:36:56.393842937Z" level=info msg="Container to stop \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:36:56.393907 env[1198]: time="2025-07-10T00:36:56.393857344Z" level=info msg="Container to stop \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:36:56.393907 env[1198]: time="2025-07-10T00:36:56.393868285Z" level=info msg="Container to stop \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:36:56.393907 env[1198]: time="2025-07-10T00:36:56.393877813Z" level=info msg="Container to stop \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:36:56.393907 env[1198]: time="2025-07-10T00:36:56.393886550Z" level=info msg="Container to stop \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:36:56.398968 systemd[1]: cri-containerd-8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa.scope: Deactivated successfully. 
Jul 10 00:36:56.514585 env[1198]: time="2025-07-10T00:36:56.514532148Z" level=info msg="shim disconnected" id=8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa Jul 10 00:36:56.514585 env[1198]: time="2025-07-10T00:36:56.514576934Z" level=warning msg="cleaning up after shim disconnected" id=8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa namespace=k8s.io Jul 10 00:36:56.514585 env[1198]: time="2025-07-10T00:36:56.514584908Z" level=info msg="cleaning up dead shim" Jul 10 00:36:56.520973 env[1198]: time="2025-07-10T00:36:56.520925229Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3698 runtime=io.containerd.runc.v2\n" Jul 10 00:36:56.521267 env[1198]: time="2025-07-10T00:36:56.521243151Z" level=info msg="TearDown network for sandbox \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" successfully" Jul 10 00:36:56.521298 env[1198]: time="2025-07-10T00:36:56.521266815Z" level=info msg="StopPodSandbox for \"8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa\" returns successfully" Jul 10 00:36:56.536653 kubelet[1917]: I0710 00:36:56.535666 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v74kl\" (UniqueName: \"kubernetes.io/projected/62a42ec5-d39b-4b81-9a19-4de4895106ca-kube-api-access-v74kl\") pod \"62a42ec5-d39b-4b81-9a19-4de4895106ca\" (UID: \"62a42ec5-d39b-4b81-9a19-4de4895106ca\") " Jul 10 00:36:56.536653 kubelet[1917]: I0710 00:36:56.535715 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62a42ec5-d39b-4b81-9a19-4de4895106ca-cilium-config-path\") pod \"62a42ec5-d39b-4b81-9a19-4de4895106ca\" (UID: \"62a42ec5-d39b-4b81-9a19-4de4895106ca\") " Jul 10 00:36:56.538741 kubelet[1917]: I0710 00:36:56.538699 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/62a42ec5-d39b-4b81-9a19-4de4895106ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "62a42ec5-d39b-4b81-9a19-4de4895106ca" (UID: "62a42ec5-d39b-4b81-9a19-4de4895106ca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:36:56.540925 kubelet[1917]: I0710 00:36:56.540892 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a42ec5-d39b-4b81-9a19-4de4895106ca-kube-api-access-v74kl" (OuterVolumeSpecName: "kube-api-access-v74kl") pod "62a42ec5-d39b-4b81-9a19-4de4895106ca" (UID: "62a42ec5-d39b-4b81-9a19-4de4895106ca"). InnerVolumeSpecName "kube-api-access-v74kl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:36:56.636262 kubelet[1917]: I0710 00:36:56.636202 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-hostproc\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636262 kubelet[1917]: I0710 00:36:56.636243 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-xtables-lock\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636262 kubelet[1917]: I0710 00:36:56.636272 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc945\" (UniqueName: \"kubernetes.io/projected/169dd06f-d173-4dfe-8294-687335d47d83-kube-api-access-zc945\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636538 kubelet[1917]: I0710 00:36:56.636291 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-host-proc-sys-net\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636538 kubelet[1917]: I0710 00:36:56.636309 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cni-path\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636538 kubelet[1917]: I0710 00:36:56.636318 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-hostproc" (OuterVolumeSpecName: "hostproc") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.636538 kubelet[1917]: I0710 00:36:56.636329 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.636538 kubelet[1917]: I0710 00:36:56.636328 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-lib-modules\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636664 kubelet[1917]: I0710 00:36:56.636358 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.636664 kubelet[1917]: I0710 00:36:56.636363 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.636664 kubelet[1917]: I0710 00:36:56.636400 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/169dd06f-d173-4dfe-8294-687335d47d83-clustermesh-secrets\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636664 kubelet[1917]: I0710 00:36:56.636409 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cni-path" (OuterVolumeSpecName: "cni-path") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.636664 kubelet[1917]: I0710 00:36:56.636418 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/169dd06f-d173-4dfe-8294-687335d47d83-cilium-config-path\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636781 kubelet[1917]: I0710 00:36:56.636431 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-etc-cni-netd\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636781 kubelet[1917]: I0710 00:36:56.636446 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cilium-cgroup\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636781 kubelet[1917]: I0710 00:36:56.636457 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-bpf-maps\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636781 kubelet[1917]: I0710 00:36:56.636469 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cilium-run\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636781 kubelet[1917]: I0710 00:36:56.636484 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/169dd06f-d173-4dfe-8294-687335d47d83-hubble-tls\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636781 kubelet[1917]: I0710 00:36:56.636497 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-host-proc-sys-kernel\") pod \"169dd06f-d173-4dfe-8294-687335d47d83\" (UID: \"169dd06f-d173-4dfe-8294-687335d47d83\") " Jul 10 00:36:56.636959 kubelet[1917]: I0710 00:36:56.636530 1917 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.636959 kubelet[1917]: I0710 00:36:56.636540 1917 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.636959 kubelet[1917]: I0710 00:36:56.636549 1917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.636959 kubelet[1917]: I0710 00:36:56.636557 1917 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.636959 kubelet[1917]: I0710 00:36:56.636563 1917 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.636959 kubelet[1917]: I0710 00:36:56.636571 1917 reconciler_common.go:299] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62a42ec5-d39b-4b81-9a19-4de4895106ca-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.636959 kubelet[1917]: I0710 00:36:56.636578 1917 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v74kl\" (UniqueName: \"kubernetes.io/projected/62a42ec5-d39b-4b81-9a19-4de4895106ca-kube-api-access-v74kl\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.637142 kubelet[1917]: I0710 00:36:56.636595 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.637683 kubelet[1917]: I0710 00:36:56.637249 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.637683 kubelet[1917]: I0710 00:36:56.637317 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.637683 kubelet[1917]: I0710 00:36:56.637330 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.637683 kubelet[1917]: I0710 00:36:56.637345 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:56.638496 kubelet[1917]: I0710 00:36:56.638463 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169dd06f-d173-4dfe-8294-687335d47d83-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:36:56.638949 kubelet[1917]: I0710 00:36:56.638922 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169dd06f-d173-4dfe-8294-687335d47d83-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:36:56.638949 kubelet[1917]: I0710 00:36:56.638927 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169dd06f-d173-4dfe-8294-687335d47d83-kube-api-access-zc945" (OuterVolumeSpecName: "kube-api-access-zc945") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "kube-api-access-zc945". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:36:56.640418 kubelet[1917]: I0710 00:36:56.640395 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169dd06f-d173-4dfe-8294-687335d47d83-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "169dd06f-d173-4dfe-8294-687335d47d83" (UID: "169dd06f-d173-4dfe-8294-687335d47d83"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:36:56.737758 kubelet[1917]: I0710 00:36:56.737726 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.737758 kubelet[1917]: I0710 00:36:56.737749 1917 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.737758 kubelet[1917]: I0710 00:36:56.737757 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.737758 kubelet[1917]: I0710 00:36:56.737764 1917 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/169dd06f-d173-4dfe-8294-687335d47d83-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 
10 00:36:56.737950 kubelet[1917]: I0710 00:36:56.737772 1917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.737950 kubelet[1917]: I0710 00:36:56.737781 1917 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zc945\" (UniqueName: \"kubernetes.io/projected/169dd06f-d173-4dfe-8294-687335d47d83-kube-api-access-zc945\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.737950 kubelet[1917]: I0710 00:36:56.737789 1917 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/169dd06f-d173-4dfe-8294-687335d47d83-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.737950 kubelet[1917]: I0710 00:36:56.737796 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/169dd06f-d173-4dfe-8294-687335d47d83-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:56.737950 kubelet[1917]: I0710 00:36:56.737803 1917 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/169dd06f-d173-4dfe-8294-687335d47d83-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:57.122511 systemd[1]: Removed slice kubepods-burstable-pod169dd06f_d173_4dfe_8294_687335d47d83.slice. Jul 10 00:36:57.122593 systemd[1]: kubepods-burstable-pod169dd06f_d173_4dfe_8294_687335d47d83.slice: Consumed 6.535s CPU time. Jul 10 00:36:57.123466 systemd[1]: Removed slice kubepods-besteffort-pod62a42ec5_d39b_4b81_9a19_4de4895106ca.slice. Jul 10 00:36:57.291421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6-rootfs.mount: Deactivated successfully. 
Jul 10 00:36:57.291548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f178e6582638e8d5d9a6acee97fabeaed22e7f5274fec7a71b5717d0447e844-rootfs.mount: Deactivated successfully. Jul 10 00:36:57.291621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa-rootfs.mount: Deactivated successfully. Jul 10 00:36:57.291707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b6fb06a6e418d62ee31940532279efd9778741127831f87a47740dfba2133fa-shm.mount: Deactivated successfully. Jul 10 00:36:57.291785 systemd[1]: var-lib-kubelet-pods-62a42ec5\x2dd39b\x2d4b81\x2d9a19\x2d4de4895106ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv74kl.mount: Deactivated successfully. Jul 10 00:36:57.291863 systemd[1]: var-lib-kubelet-pods-169dd06f\x2dd173\x2d4dfe\x2d8294\x2d687335d47d83-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzc945.mount: Deactivated successfully. Jul 10 00:36:57.291942 systemd[1]: var-lib-kubelet-pods-169dd06f\x2dd173\x2d4dfe\x2d8294\x2d687335d47d83-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:36:57.292033 systemd[1]: var-lib-kubelet-pods-169dd06f\x2dd173\x2d4dfe\x2d8294\x2d687335d47d83-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 10 00:36:57.303616 kubelet[1917]: I0710 00:36:57.303570 1917 scope.go:117] "RemoveContainer" containerID="ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21" Jul 10 00:36:57.305396 env[1198]: time="2025-07-10T00:36:57.305331690Z" level=info msg="RemoveContainer for \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\"" Jul 10 00:36:57.311215 env[1198]: time="2025-07-10T00:36:57.311158843Z" level=info msg="RemoveContainer for \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\" returns successfully" Jul 10 00:36:57.311960 kubelet[1917]: I0710 00:36:57.311870 1917 scope.go:117] "RemoveContainer" containerID="ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21" Jul 10 00:36:57.312267 env[1198]: time="2025-07-10T00:36:57.312137537Z" level=error msg="ContainerStatus for \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\": not found" Jul 10 00:36:57.312830 kubelet[1917]: E0710 00:36:57.312788 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\": not found" containerID="ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21" Jul 10 00:36:57.313105 kubelet[1917]: I0710 00:36:57.313038 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21"} err="failed to get container status \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba63d7ef80890e2ae9b070996c2662e3138b5ed64babf4faf8e3d9c747eaba21\": not found" Jul 10 00:36:57.313243 kubelet[1917]: I0710 
00:36:57.313219 1917 scope.go:117] "RemoveContainer" containerID="da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6" Jul 10 00:36:57.315816 env[1198]: time="2025-07-10T00:36:57.315774403Z" level=info msg="RemoveContainer for \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\"" Jul 10 00:36:57.320044 env[1198]: time="2025-07-10T00:36:57.319986137Z" level=info msg="RemoveContainer for \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\" returns successfully" Jul 10 00:36:57.320263 kubelet[1917]: I0710 00:36:57.320194 1917 scope.go:117] "RemoveContainer" containerID="b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275" Jul 10 00:36:57.322153 env[1198]: time="2025-07-10T00:36:57.322117463Z" level=info msg="RemoveContainer for \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\"" Jul 10 00:36:57.325463 env[1198]: time="2025-07-10T00:36:57.325425407Z" level=info msg="RemoveContainer for \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\" returns successfully" Jul 10 00:36:57.325644 kubelet[1917]: I0710 00:36:57.325597 1917 scope.go:117] "RemoveContainer" containerID="e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56" Jul 10 00:36:57.327098 env[1198]: time="2025-07-10T00:36:57.327061003Z" level=info msg="RemoveContainer for \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\"" Jul 10 00:36:57.331700 env[1198]: time="2025-07-10T00:36:57.331641596Z" level=info msg="RemoveContainer for \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\" returns successfully" Jul 10 00:36:57.331908 kubelet[1917]: I0710 00:36:57.331863 1917 scope.go:117] "RemoveContainer" containerID="c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022" Jul 10 00:36:57.333399 env[1198]: time="2025-07-10T00:36:57.333346614Z" level=info msg="RemoveContainer for \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\"" Jul 10 00:36:57.336271 
env[1198]: time="2025-07-10T00:36:57.336238551Z" level=info msg="RemoveContainer for \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\" returns successfully" Jul 10 00:36:57.336393 kubelet[1917]: I0710 00:36:57.336358 1917 scope.go:117] "RemoveContainer" containerID="076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33" Jul 10 00:36:57.337396 env[1198]: time="2025-07-10T00:36:57.337207656Z" level=info msg="RemoveContainer for \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\"" Jul 10 00:36:57.339942 env[1198]: time="2025-07-10T00:36:57.339891697Z" level=info msg="RemoveContainer for \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\" returns successfully" Jul 10 00:36:57.340051 kubelet[1917]: I0710 00:36:57.340029 1917 scope.go:117] "RemoveContainer" containerID="da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6" Jul 10 00:36:57.340283 env[1198]: time="2025-07-10T00:36:57.340224036Z" level=error msg="ContainerStatus for \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\": not found" Jul 10 00:36:57.340413 kubelet[1917]: E0710 00:36:57.340388 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\": not found" containerID="da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6" Jul 10 00:36:57.340479 kubelet[1917]: I0710 00:36:57.340425 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6"} err="failed to get container status \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"da26691f062d1481678bc7d15003283b6f0525089745823682ecfdcd357f9ec6\": not found" Jul 10 00:36:57.340479 kubelet[1917]: I0710 00:36:57.340451 1917 scope.go:117] "RemoveContainer" containerID="b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275" Jul 10 00:36:57.340637 env[1198]: time="2025-07-10T00:36:57.340594689Z" level=error msg="ContainerStatus for \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\": not found" Jul 10 00:36:57.340723 kubelet[1917]: E0710 00:36:57.340707 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\": not found" containerID="b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275" Jul 10 00:36:57.340754 kubelet[1917]: I0710 00:36:57.340724 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275"} err="failed to get container status \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9d2c118945e412f43c508071ac0adeeb7b1dd36b3fc39e803898ede6e5f0275\": not found" Jul 10 00:36:57.340754 kubelet[1917]: I0710 00:36:57.340735 1917 scope.go:117] "RemoveContainer" containerID="e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56" Jul 10 00:36:57.340959 env[1198]: time="2025-07-10T00:36:57.340892573Z" level=error msg="ContainerStatus for \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\": not found" Jul 10 00:36:57.341121 kubelet[1917]: E0710 00:36:57.341086 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\": not found" containerID="e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56" Jul 10 00:36:57.341169 kubelet[1917]: I0710 00:36:57.341126 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56"} err="failed to get container status \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7c1f5d0db1a417bfea3bb04ecc27bbdd94367f5dd7777f770e7c8d354839d56\": not found" Jul 10 00:36:57.341169 kubelet[1917]: I0710 00:36:57.341146 1917 scope.go:117] "RemoveContainer" containerID="c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022" Jul 10 00:36:57.341334 env[1198]: time="2025-07-10T00:36:57.341292380Z" level=error msg="ContainerStatus for \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\": not found" Jul 10 00:36:57.341428 kubelet[1917]: E0710 00:36:57.341410 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\": not found" containerID="c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022" Jul 10 00:36:57.341478 kubelet[1917]: I0710 00:36:57.341430 1917 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022"} err="failed to get container status \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\": rpc error: code = NotFound desc = an error occurred when try to find container \"c551b6acc34404b3e4b0fc06f8c4e5352d24e134e04532e01d81a3ac3f528022\": not found" Jul 10 00:36:57.341478 kubelet[1917]: I0710 00:36:57.341444 1917 scope.go:117] "RemoveContainer" containerID="076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33" Jul 10 00:36:57.341628 env[1198]: time="2025-07-10T00:36:57.341588059Z" level=error msg="ContainerStatus for \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\": not found" Jul 10 00:36:57.341727 kubelet[1917]: E0710 00:36:57.341706 1917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\": not found" containerID="076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33" Jul 10 00:36:57.341727 kubelet[1917]: I0710 00:36:57.341724 1917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33"} err="failed to get container status \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\": rpc error: code = NotFound desc = an error occurred when try to find container \"076299e4ec80a07309cd0571bbad5ed95002971ea1f43cefd24f2273b9eeaa33\": not found" Jul 10 00:36:58.175597 kubelet[1917]: E0710 00:36:58.175522 1917 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" Jul 10 00:36:58.254763 sshd[3555]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:58.257599 systemd[1]: sshd@22-10.0.0.19:22-10.0.0.1:41672.service: Deactivated successfully. Jul 10 00:36:58.258269 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:36:58.258836 systemd-logind[1186]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:36:58.260048 systemd[1]: Started sshd@23-10.0.0.19:22-10.0.0.1:45148.service. Jul 10 00:36:58.261183 systemd-logind[1186]: Removed session 23. Jul 10 00:36:58.294040 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 45148 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:58.295073 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:58.298429 systemd-logind[1186]: New session 24 of user core. Jul 10 00:36:58.299228 systemd[1]: Started session-24.scope. Jul 10 00:36:58.734564 sshd[3717]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:58.739840 systemd[1]: Started sshd@24-10.0.0.19:22-10.0.0.1:45160.service. Jul 10 00:36:58.741884 systemd[1]: sshd@23-10.0.0.19:22-10.0.0.1:45148.service: Deactivated successfully. Jul 10 00:36:58.742894 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:36:58.743612 systemd-logind[1186]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:36:58.744542 systemd-logind[1186]: Removed session 24. Jul 10 00:36:58.769285 systemd[1]: Created slice kubepods-burstable-poda6a3d565_14cf_4383_b207_fca455f58696.slice. Jul 10 00:36:58.778911 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 45160 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:58.780588 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:58.785620 systemd[1]: Started session-25.scope. Jul 10 00:36:58.786217 systemd-logind[1186]: New session 25 of user core. 
Jul 10 00:36:58.850178 kubelet[1917]: I0710 00:36:58.850121 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cilium-cgroup\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850178 kubelet[1917]: I0710 00:36:58.850161 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-etc-cni-netd\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850178 kubelet[1917]: I0710 00:36:58.850185 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-xtables-lock\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850620 kubelet[1917]: I0710 00:36:58.850199 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-host-proc-sys-net\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850620 kubelet[1917]: I0710 00:36:58.850214 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6a3d565-14cf-4383-b207-fca455f58696-hubble-tls\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850620 kubelet[1917]: I0710 00:36:58.850231 1917 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcfv2\" (UniqueName: \"kubernetes.io/projected/a6a3d565-14cf-4383-b207-fca455f58696-kube-api-access-fcfv2\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850620 kubelet[1917]: I0710 00:36:58.850244 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-bpf-maps\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850620 kubelet[1917]: I0710 00:36:58.850256 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cni-path\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850620 kubelet[1917]: I0710 00:36:58.850270 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cilium-run\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850757 kubelet[1917]: I0710 00:36:58.850281 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-lib-modules\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850757 kubelet[1917]: I0710 00:36:58.850294 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a6a3d565-14cf-4383-b207-fca455f58696-cilium-config-path\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850757 kubelet[1917]: I0710 00:36:58.850309 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6a3d565-14cf-4383-b207-fca455f58696-cilium-ipsec-secrets\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850757 kubelet[1917]: I0710 00:36:58.850349 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-host-proc-sys-kernel\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850757 kubelet[1917]: I0710 00:36:58.850363 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-hostproc\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.850865 kubelet[1917]: I0710 00:36:58.850400 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6a3d565-14cf-4383-b207-fca455f58696-clustermesh-secrets\") pod \"cilium-nqx94\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " pod="kube-system/cilium-nqx94" Jul 10 00:36:58.911663 sshd[3728]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:58.915177 systemd[1]: sshd@24-10.0.0.19:22-10.0.0.1:45160.service: Deactivated successfully. Jul 10 00:36:58.915873 systemd[1]: session-25.scope: Deactivated successfully. 
Jul 10 00:36:58.917800 systemd-logind[1186]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:36:58.918927 systemd[1]: Started sshd@25-10.0.0.19:22-10.0.0.1:45172.service. Jul 10 00:36:58.923334 systemd-logind[1186]: Removed session 25. Jul 10 00:36:58.924058 kubelet[1917]: E0710 00:36:58.923888 1917 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-fcfv2 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-nqx94" podUID="a6a3d565-14cf-4383-b207-fca455f58696" Jul 10 00:36:58.963668 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 45172 ssh2: RSA SHA256:sjwemXrFIWSW6YMJmGZUZttp2LaJHY3bFypW68DkT1M Jul 10 00:36:58.965266 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:58.971446 systemd-logind[1186]: New session 26 of user core. Jul 10 00:36:58.972286 systemd[1]: Started session-26.scope. 
Jul 10 00:36:59.118064 kubelet[1917]: I0710 00:36:59.118024 1917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169dd06f-d173-4dfe-8294-687335d47d83" path="/var/lib/kubelet/pods/169dd06f-d173-4dfe-8294-687335d47d83/volumes" Jul 10 00:36:59.118588 kubelet[1917]: I0710 00:36:59.118566 1917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a42ec5-d39b-4b81-9a19-4de4895106ca" path="/var/lib/kubelet/pods/62a42ec5-d39b-4b81-9a19-4de4895106ca/volumes" Jul 10 00:36:59.454819 kubelet[1917]: I0710 00:36:59.454668 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcfv2\" (UniqueName: \"kubernetes.io/projected/a6a3d565-14cf-4383-b207-fca455f58696-kube-api-access-fcfv2\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.454819 kubelet[1917]: I0710 00:36:59.454710 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6a3d565-14cf-4383-b207-fca455f58696-cilium-config-path\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.454819 kubelet[1917]: I0710 00:36:59.454728 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-lib-modules\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.454819 kubelet[1917]: I0710 00:36:59.454740 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-etc-cni-netd\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.454819 kubelet[1917]: I0710 00:36:59.454753 1917 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-host-proc-sys-kernel\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.454819 kubelet[1917]: I0710 00:36:59.454766 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-hostproc\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455196 kubelet[1917]: I0710 00:36:59.454780 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cilium-cgroup\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455196 kubelet[1917]: I0710 00:36:59.454794 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6a3d565-14cf-4383-b207-fca455f58696-hubble-tls\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455196 kubelet[1917]: I0710 00:36:59.454811 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.455196 kubelet[1917]: I0710 00:36:59.454859 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.455196 kubelet[1917]: I0710 00:36:59.454878 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-hostproc" (OuterVolumeSpecName: "hostproc") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.455421 kubelet[1917]: I0710 00:36:59.455144 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.455421 kubelet[1917]: I0710 00:36:59.455174 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.455421 kubelet[1917]: I0710 00:36:59.455203 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6a3d565-14cf-4383-b207-fca455f58696-cilium-ipsec-secrets\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455421 kubelet[1917]: I0710 00:36:59.455228 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cni-path\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455421 kubelet[1917]: I0710 00:36:59.455247 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-bpf-maps\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455421 kubelet[1917]: I0710 00:36:59.455271 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6a3d565-14cf-4383-b207-fca455f58696-clustermesh-secrets\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455658 kubelet[1917]: I0710 00:36:59.455307 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cilium-run\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455658 kubelet[1917]: I0710 00:36:59.455330 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-host-proc-sys-net\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455658 kubelet[1917]: I0710 00:36:59.455353 1917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-xtables-lock\") pod \"a6a3d565-14cf-4383-b207-fca455f58696\" (UID: \"a6a3d565-14cf-4383-b207-fca455f58696\") " Jul 10 00:36:59.455658 kubelet[1917]: I0710 00:36:59.455413 1917 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.455658 kubelet[1917]: I0710 00:36:59.455428 1917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.455658 kubelet[1917]: I0710 00:36:59.455440 1917 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.455658 kubelet[1917]: I0710 00:36:59.455451 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.455902 kubelet[1917]: I0710 00:36:59.455461 1917 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.455902 kubelet[1917]: I0710 00:36:59.455484 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.459676 kubelet[1917]: I0710 00:36:59.457802 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6a3d565-14cf-4383-b207-fca455f58696-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:36:59.458801 systemd[1]: var-lib-kubelet-pods-a6a3d565\x2d14cf\x2d4383\x2db207\x2dfca455f58696-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfcfv2.mount: Deactivated successfully. Jul 10 00:36:59.458893 systemd[1]: var-lib-kubelet-pods-a6a3d565\x2d14cf\x2d4383\x2db207\x2dfca455f58696-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:36:59.460941 systemd[1]: var-lib-kubelet-pods-a6a3d565\x2d14cf\x2d4383\x2db207\x2dfca455f58696-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 10 00:36:59.461021 systemd[1]: var-lib-kubelet-pods-a6a3d565\x2d14cf\x2d4383\x2db207\x2dfca455f58696-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:36:59.462000 kubelet[1917]: I0710 00:36:59.461959 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a3d565-14cf-4383-b207-fca455f58696-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:36:59.462000 kubelet[1917]: I0710 00:36:59.461964 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a3d565-14cf-4383-b207-fca455f58696-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:36:59.462120 kubelet[1917]: I0710 00:36:59.462002 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cni-path" (OuterVolumeSpecName: "cni-path") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.462120 kubelet[1917]: I0710 00:36:59.462019 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.462120 kubelet[1917]: I0710 00:36:59.462024 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.462120 kubelet[1917]: I0710 00:36:59.462037 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:36:59.462120 kubelet[1917]: I0710 00:36:59.462066 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a3d565-14cf-4383-b207-fca455f58696-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:36:59.462338 kubelet[1917]: I0710 00:36:59.462311 1917 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a3d565-14cf-4383-b207-fca455f58696-kube-api-access-fcfv2" (OuterVolumeSpecName: "kube-api-access-fcfv2") pod "a6a3d565-14cf-4383-b207-fca455f58696" (UID: "a6a3d565-14cf-4383-b207-fca455f58696"). InnerVolumeSpecName "kube-api-access-fcfv2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:36:59.555691 kubelet[1917]: I0710 00:36:59.555644 1917 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.555691 kubelet[1917]: I0710 00:36:59.555674 1917 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fcfv2\" (UniqueName: \"kubernetes.io/projected/a6a3d565-14cf-4383-b207-fca455f58696-kube-api-access-fcfv2\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.555691 kubelet[1917]: I0710 00:36:59.555695 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6a3d565-14cf-4383-b207-fca455f58696-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.555881 kubelet[1917]: I0710 00:36:59.555708 1917 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6a3d565-14cf-4383-b207-fca455f58696-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.555881 kubelet[1917]: I0710 00:36:59.555718 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6a3d565-14cf-4383-b207-fca455f58696-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.555881 kubelet[1917]: I0710 00:36:59.555727 1917 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.555881 kubelet[1917]: I0710 00:36:59.555736 1917 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.555881 kubelet[1917]: I0710 
00:36:59.555745 1917 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6a3d565-14cf-4383-b207-fca455f58696-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.555881 kubelet[1917]: I0710 00:36:59.555754 1917 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.555881 kubelet[1917]: I0710 00:36:59.555763 1917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6a3d565-14cf-4383-b207-fca455f58696-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:37:00.116852 kubelet[1917]: E0710 00:37:00.116793 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:00.323319 systemd[1]: Removed slice kubepods-burstable-poda6a3d565_14cf_4383_b207_fca455f58696.slice. Jul 10 00:37:00.378944 systemd[1]: Created slice kubepods-burstable-pod299c736f_7ac7_4f9f_be77_a1e5b135ed91.slice. 
Jul 10 00:37:00.460737 kubelet[1917]: I0710 00:37:00.460682 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-host-proc-sys-net\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.460737 kubelet[1917]: I0710 00:37:00.460737 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-cilium-run\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461046 kubelet[1917]: I0710 00:37:00.460798 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-bpf-maps\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461046 kubelet[1917]: I0710 00:37:00.460816 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/299c736f-7ac7-4f9f-be77-a1e5b135ed91-cilium-config-path\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461046 kubelet[1917]: I0710 00:37:00.460837 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-host-proc-sys-kernel\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461046 kubelet[1917]: I0710 00:37:00.460859 1917 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/299c736f-7ac7-4f9f-be77-a1e5b135ed91-cilium-ipsec-secrets\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461046 kubelet[1917]: I0710 00:37:00.460879 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/299c736f-7ac7-4f9f-be77-a1e5b135ed91-hubble-tls\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461046 kubelet[1917]: I0710 00:37:00.460902 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-cilium-cgroup\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461960 kubelet[1917]: I0710 00:37:00.460936 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-lib-modules\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461960 kubelet[1917]: I0710 00:37:00.460993 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njzkg\" (UniqueName: \"kubernetes.io/projected/299c736f-7ac7-4f9f-be77-a1e5b135ed91-kube-api-access-njzkg\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461960 kubelet[1917]: I0710 00:37:00.461031 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-cni-path\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461960 kubelet[1917]: I0710 00:37:00.461055 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/299c736f-7ac7-4f9f-be77-a1e5b135ed91-clustermesh-secrets\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461960 kubelet[1917]: I0710 00:37:00.461109 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-xtables-lock\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.461960 kubelet[1917]: I0710 00:37:00.461150 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-etc-cni-netd\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.462116 kubelet[1917]: I0710 00:37:00.461198 1917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/299c736f-7ac7-4f9f-be77-a1e5b135ed91-hostproc\") pod \"cilium-trhq8\" (UID: \"299c736f-7ac7-4f9f-be77-a1e5b135ed91\") " pod="kube-system/cilium-trhq8" Jul 10 00:37:00.684559 kubelet[1917]: E0710 00:37:00.684310 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:00.685635 env[1198]: time="2025-07-10T00:37:00.685547846Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trhq8,Uid:299c736f-7ac7-4f9f-be77-a1e5b135ed91,Namespace:kube-system,Attempt:0,}"
Jul 10 00:37:00.706723 env[1198]: time="2025-07-10T00:37:00.706635465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:37:00.706723 env[1198]: time="2025-07-10T00:37:00.706683576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:37:00.706723 env[1198]: time="2025-07-10T00:37:00.706698985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:00.707053 env[1198]: time="2025-07-10T00:37:00.706853619Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237 pid=3771 runtime=io.containerd.runc.v2
Jul 10 00:37:00.724540 systemd[1]: Started cri-containerd-fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237.scope.
Jul 10 00:37:00.772839 env[1198]: time="2025-07-10T00:37:00.772778181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-trhq8,Uid:299c736f-7ac7-4f9f-be77-a1e5b135ed91,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\""
Jul 10 00:37:00.773760 kubelet[1917]: E0710 00:37:00.773719 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:00.780748 env[1198]: time="2025-07-10T00:37:00.780686575Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 10 00:37:00.795108 env[1198]: time="2025-07-10T00:37:00.794736505Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a25861f06f573ad74d1e4590d09dee5905211dbdb11a6039515a3875b1c8c80e\""
Jul 10 00:37:00.795672 env[1198]: time="2025-07-10T00:37:00.795598812Z" level=info msg="StartContainer for \"a25861f06f573ad74d1e4590d09dee5905211dbdb11a6039515a3875b1c8c80e\""
Jul 10 00:37:00.816278 systemd[1]: Started cri-containerd-a25861f06f573ad74d1e4590d09dee5905211dbdb11a6039515a3875b1c8c80e.scope.
Jul 10 00:37:00.847818 env[1198]: time="2025-07-10T00:37:00.847738118Z" level=info msg="StartContainer for \"a25861f06f573ad74d1e4590d09dee5905211dbdb11a6039515a3875b1c8c80e\" returns successfully"
Jul 10 00:37:00.857074 systemd[1]: cri-containerd-a25861f06f573ad74d1e4590d09dee5905211dbdb11a6039515a3875b1c8c80e.scope: Deactivated successfully.
Jul 10 00:37:00.889731 env[1198]: time="2025-07-10T00:37:00.889660295Z" level=info msg="shim disconnected" id=a25861f06f573ad74d1e4590d09dee5905211dbdb11a6039515a3875b1c8c80e
Jul 10 00:37:00.889731 env[1198]: time="2025-07-10T00:37:00.889721010Z" level=warning msg="cleaning up after shim disconnected" id=a25861f06f573ad74d1e4590d09dee5905211dbdb11a6039515a3875b1c8c80e namespace=k8s.io
Jul 10 00:37:00.889731 env[1198]: time="2025-07-10T00:37:00.889730618Z" level=info msg="cleaning up dead shim"
Jul 10 00:37:00.896713 env[1198]: time="2025-07-10T00:37:00.896682875Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3857 runtime=io.containerd.runc.v2\n"
Jul 10 00:37:01.122010 kubelet[1917]: I0710 00:37:01.121896 1917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a3d565-14cf-4383-b207-fca455f58696" path="/var/lib/kubelet/pods/a6a3d565-14cf-4383-b207-fca455f58696/volumes"
Jul 10 00:37:01.323998 kubelet[1917]: E0710 00:37:01.323904 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:01.329762 env[1198]: time="2025-07-10T00:37:01.329682691Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 00:37:01.356125 env[1198]: time="2025-07-10T00:37:01.355961863Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"78e9c78875607197ceb6064fbfd94eb6670c8fe552eabc6b750540574ca6806b\""
Jul 10 00:37:01.357158 env[1198]: time="2025-07-10T00:37:01.357112411Z" level=info msg="StartContainer for \"78e9c78875607197ceb6064fbfd94eb6670c8fe552eabc6b750540574ca6806b\""
Jul 10 00:37:01.380246 systemd[1]: Started cri-containerd-78e9c78875607197ceb6064fbfd94eb6670c8fe552eabc6b750540574ca6806b.scope.
Jul 10 00:37:01.423874 systemd[1]: cri-containerd-78e9c78875607197ceb6064fbfd94eb6670c8fe552eabc6b750540574ca6806b.scope: Deactivated successfully.
Jul 10 00:37:01.513982 env[1198]: time="2025-07-10T00:37:01.513849979Z" level=info msg="StartContainer for \"78e9c78875607197ceb6064fbfd94eb6670c8fe552eabc6b750540574ca6806b\" returns successfully"
Jul 10 00:37:01.558897 env[1198]: time="2025-07-10T00:37:01.558823780Z" level=info msg="shim disconnected" id=78e9c78875607197ceb6064fbfd94eb6670c8fe552eabc6b750540574ca6806b
Jul 10 00:37:01.558897 env[1198]: time="2025-07-10T00:37:01.558897550Z" level=warning msg="cleaning up after shim disconnected" id=78e9c78875607197ceb6064fbfd94eb6670c8fe552eabc6b750540574ca6806b namespace=k8s.io
Jul 10 00:37:01.558897 env[1198]: time="2025-07-10T00:37:01.558908912Z" level=info msg="cleaning up dead shim"
Jul 10 00:37:01.566423 env[1198]: time="2025-07-10T00:37:01.566381191Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3920 runtime=io.containerd.runc.v2\n"
Jul 10 00:37:02.327676 kubelet[1917]: E0710 00:37:02.327619 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:02.332573 env[1198]: time="2025-07-10T00:37:02.332530921Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 00:37:02.349510 env[1198]: time="2025-07-10T00:37:02.349449113Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d489e732542f984c940470fc748fb165cf111755c5a02cdc97a84ab0a9d4779b\""
Jul 10 00:37:02.350165 env[1198]: time="2025-07-10T00:37:02.350111505Z" level=info msg="StartContainer for \"d489e732542f984c940470fc748fb165cf111755c5a02cdc97a84ab0a9d4779b\""
Jul 10 00:37:02.368683 systemd[1]: Started cri-containerd-d489e732542f984c940470fc748fb165cf111755c5a02cdc97a84ab0a9d4779b.scope.
Jul 10 00:37:02.397755 env[1198]: time="2025-07-10T00:37:02.397697021Z" level=info msg="StartContainer for \"d489e732542f984c940470fc748fb165cf111755c5a02cdc97a84ab0a9d4779b\" returns successfully"
Jul 10 00:37:02.399331 systemd[1]: cri-containerd-d489e732542f984c940470fc748fb165cf111755c5a02cdc97a84ab0a9d4779b.scope: Deactivated successfully.
Jul 10 00:37:02.421336 env[1198]: time="2025-07-10T00:37:02.421286573Z" level=info msg="shim disconnected" id=d489e732542f984c940470fc748fb165cf111755c5a02cdc97a84ab0a9d4779b
Jul 10 00:37:02.421336 env[1198]: time="2025-07-10T00:37:02.421328984Z" level=warning msg="cleaning up after shim disconnected" id=d489e732542f984c940470fc748fb165cf111755c5a02cdc97a84ab0a9d4779b namespace=k8s.io
Jul 10 00:37:02.421336 env[1198]: time="2025-07-10T00:37:02.421337801Z" level=info msg="cleaning up dead shim"
Jul 10 00:37:02.429580 env[1198]: time="2025-07-10T00:37:02.429515697Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3976 runtime=io.containerd.runc.v2\n"
Jul 10 00:37:02.569813 systemd[1]: run-containerd-runc-k8s.io-d489e732542f984c940470fc748fb165cf111755c5a02cdc97a84ab0a9d4779b-runc.tzopzI.mount: Deactivated successfully.
Jul 10 00:37:02.569922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d489e732542f984c940470fc748fb165cf111755c5a02cdc97a84ab0a9d4779b-rootfs.mount: Deactivated successfully.
Jul 10 00:37:03.176551 kubelet[1917]: E0710 00:37:03.176486 1917 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 00:37:03.330623 kubelet[1917]: E0710 00:37:03.330588 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:03.359965 env[1198]: time="2025-07-10T00:37:03.359902858Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:37:03.374548 env[1198]: time="2025-07-10T00:37:03.374487215Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d2727da1523052916415f75b85a3f281ee0a0c6f76a1a6b682e1eef174c57cf7\""
Jul 10 00:37:03.375336 env[1198]: time="2025-07-10T00:37:03.375301467Z" level=info msg="StartContainer for \"d2727da1523052916415f75b85a3f281ee0a0c6f76a1a6b682e1eef174c57cf7\""
Jul 10 00:37:03.396783 systemd[1]: Started cri-containerd-d2727da1523052916415f75b85a3f281ee0a0c6f76a1a6b682e1eef174c57cf7.scope.
Jul 10 00:37:03.423154 systemd[1]: cri-containerd-d2727da1523052916415f75b85a3f281ee0a0c6f76a1a6b682e1eef174c57cf7.scope: Deactivated successfully.
Jul 10 00:37:03.423965 env[1198]: time="2025-07-10T00:37:03.423440013Z" level=info msg="StartContainer for \"d2727da1523052916415f75b85a3f281ee0a0c6f76a1a6b682e1eef174c57cf7\" returns successfully"
Jul 10 00:37:03.443994 env[1198]: time="2025-07-10T00:37:03.443852214Z" level=info msg="shim disconnected" id=d2727da1523052916415f75b85a3f281ee0a0c6f76a1a6b682e1eef174c57cf7
Jul 10 00:37:03.443994 env[1198]: time="2025-07-10T00:37:03.443910415Z" level=warning msg="cleaning up after shim disconnected" id=d2727da1523052916415f75b85a3f281ee0a0c6f76a1a6b682e1eef174c57cf7 namespace=k8s.io
Jul 10 00:37:03.443994 env[1198]: time="2025-07-10T00:37:03.443927407Z" level=info msg="cleaning up dead shim"
Jul 10 00:37:03.451093 env[1198]: time="2025-07-10T00:37:03.451025702Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4033 runtime=io.containerd.runc.v2\ntime=\"2025-07-10T00:37:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Jul 10 00:37:03.569924 systemd[1]: run-containerd-runc-k8s.io-d2727da1523052916415f75b85a3f281ee0a0c6f76a1a6b682e1eef174c57cf7-runc.tXyX2j.mount: Deactivated successfully.
Jul 10 00:37:03.570084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2727da1523052916415f75b85a3f281ee0a0c6f76a1a6b682e1eef174c57cf7-rootfs.mount: Deactivated successfully.
Jul 10 00:37:04.334957 kubelet[1917]: E0710 00:37:04.334902 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:04.340263 env[1198]: time="2025-07-10T00:37:04.340211030Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:37:04.357095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608736773.mount: Deactivated successfully.
Jul 10 00:37:04.368424 env[1198]: time="2025-07-10T00:37:04.368358969Z" level=info msg="CreateContainer within sandbox \"fcf7af2d94dc4b8c7f103bd29cabfcd423d6c586578c4402b2ffc8e689165237\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5461fae723c140a6ee3b7c3ab2ce1fcdf1379f1d2133b4884204af477739f41d\""
Jul 10 00:37:04.370213 env[1198]: time="2025-07-10T00:37:04.369056169Z" level=info msg="StartContainer for \"5461fae723c140a6ee3b7c3ab2ce1fcdf1379f1d2133b4884204af477739f41d\""
Jul 10 00:37:04.385399 systemd[1]: Started cri-containerd-5461fae723c140a6ee3b7c3ab2ce1fcdf1379f1d2133b4884204af477739f41d.scope.
Jul 10 00:37:04.410939 env[1198]: time="2025-07-10T00:37:04.410510814Z" level=info msg="StartContainer for \"5461fae723c140a6ee3b7c3ab2ce1fcdf1379f1d2133b4884204af477739f41d\" returns successfully"
Jul 10 00:37:04.688411 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 10 00:37:05.339064 kubelet[1917]: E0710 00:37:05.339030 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:05.351525 kubelet[1917]: I0710 00:37:05.351433 1917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-trhq8" podStartSLOduration=5.351406414 podStartE2EDuration="5.351406414s" podCreationTimestamp="2025-07-10 00:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:37:05.351125629 +0000 UTC m=+92.657328724" watchObservedRunningTime="2025-07-10 00:37:05.351406414 +0000 UTC m=+92.657609500"
Jul 10 00:37:05.741816 kubelet[1917]: I0710 00:37:05.741750 1917 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:37:05Z","lastTransitionTime":"2025-07-10T00:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 10 00:37:06.685251 kubelet[1917]: E0710 00:37:06.685190 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:07.210666 systemd[1]: run-containerd-runc-k8s.io-5461fae723c140a6ee3b7c3ab2ce1fcdf1379f1d2133b4884204af477739f41d-runc.KlRDeE.mount: Deactivated successfully.
Jul 10 00:37:07.534601 systemd-networkd[1017]: lxc_health: Link UP
Jul 10 00:37:07.557398 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 10 00:37:07.556649 systemd-networkd[1017]: lxc_health: Gained carrier
Jul 10 00:37:08.685525 kubelet[1917]: E0710 00:37:08.685471 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:08.850797 systemd-networkd[1017]: lxc_health: Gained IPv6LL
Jul 10 00:37:09.345947 kubelet[1917]: E0710 00:37:09.345894 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:10.348402 kubelet[1917]: E0710 00:37:10.348340 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:11.117559 kubelet[1917]: E0710 00:37:11.117484 1917 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:13.629759 sshd[3742]: pam_unix(sshd:session): session closed for user core
Jul 10 00:37:13.632513 systemd[1]: sshd@25-10.0.0.19:22-10.0.0.1:45172.service: Deactivated successfully.
Jul 10 00:37:13.633492 systemd[1]: session-26.scope: Deactivated successfully.
Jul 10 00:37:13.634128 systemd-logind[1186]: Session 26 logged out. Waiting for processes to exit.
Jul 10 00:37:13.635181 systemd-logind[1186]: Removed session 26.