Jul 15 11:33:49.859797 kernel: Linux version 5.15.188-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Jul 15 10:04:37 -00 2025
Jul 15 11:33:49.859815 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 15 11:33:49.859824 kernel: BIOS-provided physical RAM map:
Jul 15 11:33:49.859830 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 15 11:33:49.859835 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 15 11:33:49.859840 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 11:33:49.859847 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 15 11:33:49.859853 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 15 11:33:49.859860 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 11:33:49.859865 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 11:33:49.859871 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 11:33:49.859876 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 11:33:49.859882 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 11:33:49.859887 kernel: NX (Execute Disable) protection: active
Jul 15 11:33:49.859895 kernel: SMBIOS 2.8 present.
Jul 15 11:33:49.859901 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 15 11:33:49.859907 kernel: Hypervisor detected: KVM
Jul 15 11:33:49.859913 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 11:33:49.859919 kernel: kvm-clock: cpu 0, msr 7819b001, primary cpu clock
Jul 15 11:33:49.859925 kernel: kvm-clock: using sched offset of 2413756274 cycles
Jul 15 11:33:49.859932 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 11:33:49.859938 kernel: tsc: Detected 2794.750 MHz processor
Jul 15 11:33:49.859944 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 11:33:49.859951 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 11:33:49.859957 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 15 11:33:49.859964 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 11:33:49.859970 kernel: Using GB pages for direct mapping
Jul 15 11:33:49.859976 kernel: ACPI: Early table checksum verification disabled
Jul 15 11:33:49.859982 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 15 11:33:49.859988 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:49.859994 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:49.860000 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:49.860007 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 15 11:33:49.860013 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:49.860019 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:49.860025 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:49.860031 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 11:33:49.860038 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 15 11:33:49.860044 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 15 11:33:49.860050 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 15 11:33:49.860060 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 15 11:33:49.860066 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 15 11:33:49.860073 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 15 11:33:49.860079 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 15 11:33:49.860086 kernel: No NUMA configuration found
Jul 15 11:33:49.860092 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 15 11:33:49.860100 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 15 11:33:49.860106 kernel: Zone ranges:
Jul 15 11:33:49.860113 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 11:33:49.860119 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 15 11:33:49.860125 kernel: Normal empty
Jul 15 11:33:49.860132 kernel: Movable zone start for each node
Jul 15 11:33:49.860138 kernel: Early memory node ranges
Jul 15 11:33:49.860145 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 11:33:49.860151 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 15 11:33:49.860159 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 15 11:33:49.860165 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 11:33:49.860172 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 11:33:49.860178 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 15 11:33:49.860185 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 11:33:49.860191 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 11:33:49.860198 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 11:33:49.860204 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 11:33:49.860211 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 11:33:49.860217 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 11:33:49.860224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 11:33:49.860231 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 11:33:49.860237 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 11:33:49.860244 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 11:33:49.860250 kernel: TSC deadline timer available
Jul 15 11:33:49.860256 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 15 11:33:49.860263 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 11:33:49.860269 kernel: kvm-guest: setup PV sched yield
Jul 15 11:33:49.860276 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 11:33:49.860283 kernel: Booting paravirtualized kernel on KVM
Jul 15 11:33:49.860290 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 11:33:49.860296 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 15 11:33:49.860303 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 15 11:33:49.860309 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 15 11:33:49.860316 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 15 11:33:49.860322 kernel: kvm-guest: setup async PF for cpu 0
Jul 15 11:33:49.860328 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Jul 15 11:33:49.860335 kernel: kvm-guest: PV spinlocks enabled
Jul 15 11:33:49.860342 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 11:33:49.860349 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 15 11:33:49.860355 kernel: Policy zone: DMA32
Jul 15 11:33:49.860363 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b
Jul 15 11:33:49.860370 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 11:33:49.860376 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 11:33:49.860383 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 11:33:49.860390 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 11:33:49.860398 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 134796K reserved, 0K cma-reserved)
Jul 15 11:33:49.860404 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 11:33:49.860411 kernel: ftrace: allocating 34607 entries in 136 pages
Jul 15 11:33:49.860434 kernel: ftrace: allocated 136 pages with 2 groups
Jul 15 11:33:49.860456 kernel: rcu: Hierarchical RCU implementation.
Jul 15 11:33:49.860472 kernel: rcu: RCU event tracing is enabled.
Jul 15 11:33:49.860478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 11:33:49.860485 kernel: Rude variant of Tasks RCU enabled.
Jul 15 11:33:49.860491 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 11:33:49.860500 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 11:33:49.860507 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 11:33:49.860513 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 15 11:33:49.860520 kernel: random: crng init done
Jul 15 11:33:49.860526 kernel: Console: colour VGA+ 80x25
Jul 15 11:33:49.860533 kernel: printk: console [ttyS0] enabled
Jul 15 11:33:49.860550 kernel: ACPI: Core revision 20210730
Jul 15 11:33:49.860557 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 11:33:49.860563 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 11:33:49.860572 kernel: x2apic enabled
Jul 15 11:33:49.860578 kernel: Switched APIC routing to physical x2apic.
Jul 15 11:33:49.860585 kernel: kvm-guest: setup PV IPIs
Jul 15 11:33:49.860591 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 11:33:49.860598 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 15 11:33:49.860604 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 15 11:33:49.860611 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 11:33:49.860618 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 11:33:49.860625 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 11:33:49.860637 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 11:33:49.860644 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 11:33:49.860655 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 11:33:49.860664 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 15 11:33:49.860717 kernel: RETBleed: Mitigation: untrained return thunk
Jul 15 11:33:49.860724 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 11:33:49.860731 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 15 11:33:49.860738 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 11:33:49.860746 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 11:33:49.860754 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 11:33:49.860761 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 11:33:49.860768 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 15 11:33:49.860783 kernel: Freeing SMP alternatives memory: 32K
Jul 15 11:33:49.860790 kernel: pid_max: default: 32768 minimum: 301
Jul 15 11:33:49.860797 kernel: LSM: Security Framework initializing
Jul 15 11:33:49.860804 kernel: SELinux: Initializing.
Jul 15 11:33:49.860811 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:33:49.860819 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 11:33:49.860827 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 15 11:33:49.860834 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 11:33:49.860841 kernel: ... version: 0
Jul 15 11:33:49.860848 kernel: ... bit width: 48
Jul 15 11:33:49.860855 kernel: ... generic registers: 6
Jul 15 11:33:49.860862 kernel: ... value mask: 0000ffffffffffff
Jul 15 11:33:49.860869 kernel: ... max period: 00007fffffffffff
Jul 15 11:33:49.860875 kernel: ... fixed-purpose events: 0
Jul 15 11:33:49.860884 kernel: ... event mask: 000000000000003f
Jul 15 11:33:49.860891 kernel: signal: max sigframe size: 1776
Jul 15 11:33:49.860898 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 11:33:49.860905 kernel: smp: Bringing up secondary CPUs ...
Jul 15 11:33:49.860911 kernel: x86: Booting SMP configuration:
Jul 15 11:33:49.860918 kernel: .... node #0, CPUs: #1
Jul 15 11:33:49.860925 kernel: kvm-clock: cpu 1, msr 7819b041, secondary cpu clock
Jul 15 11:33:49.860932 kernel: kvm-guest: setup async PF for cpu 1
Jul 15 11:33:49.860939 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Jul 15 11:33:49.860947 kernel: #2
Jul 15 11:33:49.860954 kernel: kvm-clock: cpu 2, msr 7819b081, secondary cpu clock
Jul 15 11:33:49.860961 kernel: kvm-guest: setup async PF for cpu 2
Jul 15 11:33:49.860968 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Jul 15 11:33:49.860975 kernel: #3
Jul 15 11:33:49.860982 kernel: kvm-clock: cpu 3, msr 7819b0c1, secondary cpu clock
Jul 15 11:33:49.860989 kernel: kvm-guest: setup async PF for cpu 3
Jul 15 11:33:49.860996 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Jul 15 11:33:49.861004 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 11:33:49.861014 kernel: smpboot: Max logical packages: 1
Jul 15 11:33:49.861022 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 15 11:33:49.861030 kernel: devtmpfs: initialized
Jul 15 11:33:49.861037 kernel: x86/mm: Memory block size: 128MB
Jul 15 11:33:49.861044 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 11:33:49.861051 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 11:33:49.861058 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 11:33:49.861065 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 11:33:49.861072 kernel: audit: initializing netlink subsys (disabled)
Jul 15 11:33:49.861081 kernel: audit: type=2000 audit(1752579229.878:1): state=initialized audit_enabled=0 res=1
Jul 15 11:33:49.861088 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 11:33:49.861095 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 11:33:49.861102 kernel: cpuidle: using governor menu
Jul 15 11:33:49.861109 kernel: ACPI: bus type PCI registered
Jul 15 11:33:49.861116 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 11:33:49.861123 kernel: dca service started, version 1.12.1
Jul 15 11:33:49.861130 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 15 11:33:49.861137 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 15 11:33:49.861145 kernel: PCI: Using configuration type 1 for base access
Jul 15 11:33:49.861153 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 11:33:49.861160 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 11:33:49.861167 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 11:33:49.861173 kernel: ACPI: Added _OSI(Module Device)
Jul 15 11:33:49.861180 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 11:33:49.861187 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 11:33:49.861194 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 15 11:33:49.861201 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 15 11:33:49.861208 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 15 11:33:49.861215 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 11:33:49.861222 kernel: ACPI: Interpreter enabled
Jul 15 11:33:49.861229 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 11:33:49.861235 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 11:33:49.861242 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 11:33:49.861249 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 11:33:49.861256 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 11:33:49.861379 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 11:33:49.861453 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 11:33:49.861521 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 11:33:49.861530 kernel: PCI host bridge to bus 0000:00
Jul 15 11:33:49.861624 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 11:33:49.861685 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 11:33:49.861744 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 11:33:49.861817 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 15 11:33:49.861878 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 11:33:49.861937 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 15 11:33:49.862000 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 11:33:49.862081 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 15 11:33:49.862157 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 15 11:33:49.862226 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 15 11:33:49.862298 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 15 11:33:49.862366 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 15 11:33:49.862432 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 11:33:49.862524 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 15 11:33:49.862616 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 15 11:33:49.862707 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 15 11:33:49.862799 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 11:33:49.862881 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 15 11:33:49.862949 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 15 11:33:49.863018 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 15 11:33:49.863086 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 11:33:49.863164 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 15 11:33:49.863231 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 15 11:33:49.863301 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 15 11:33:49.863368 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 15 11:33:49.863435 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 15 11:33:49.863510 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 15 11:33:49.863594 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 11:33:49.863675 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 15 11:33:49.863743 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 15 11:33:49.863826 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 15 11:33:49.863901 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 15 11:33:49.863968 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 15 11:33:49.863977 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 11:33:49.863984 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 11:33:49.863991 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 11:33:49.863998 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 11:33:49.864005 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 11:33:49.864014 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 11:33:49.864021 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 11:33:49.864028 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 11:33:49.864035 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 11:33:49.864042 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 11:33:49.864049 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 11:33:49.864056 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 11:33:49.864063 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 11:33:49.864070 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 11:33:49.864078 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 11:33:49.864085 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 11:33:49.864092 kernel: iommu: Default domain type: Translated
Jul 15 11:33:49.864099 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 11:33:49.864165 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 11:33:49.864234 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 11:33:49.864301 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 11:33:49.864310 kernel: vgaarb: loaded
Jul 15 11:33:49.864317 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 15 11:33:49.864326 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 15 11:33:49.864333 kernel: PTP clock support registered
Jul 15 11:33:49.864340 kernel: PCI: Using ACPI for IRQ routing
Jul 15 11:33:49.864347 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 11:33:49.864354 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 15 11:33:49.864361 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 15 11:33:49.864368 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 11:33:49.864375 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 11:33:49.864382 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 11:33:49.864390 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 11:33:49.864397 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 11:33:49.864404 kernel: pnp: PnP ACPI init
Jul 15 11:33:49.864483 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 11:33:49.864494 kernel: pnp: PnP ACPI: found 6 devices
Jul 15 11:33:49.864501 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 11:33:49.864508 kernel: NET: Registered PF_INET protocol family
Jul 15 11:33:49.864515 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 11:33:49.864524 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 11:33:49.864531 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 11:33:49.864538 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 11:33:49.864557 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 15 11:33:49.864564 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 11:33:49.864571 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:33:49.864578 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 11:33:49.864585 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 11:33:49.864592 kernel: NET: Registered PF_XDP protocol family
Jul 15 11:33:49.864660 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 11:33:49.864720 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 11:33:49.864789 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 11:33:49.864851 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 15 11:33:49.864911 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 11:33:49.864970 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 15 11:33:49.864979 kernel: PCI: CLS 0 bytes, default 64
Jul 15 11:33:49.864986 kernel: Initialise system trusted keyrings
Jul 15 11:33:49.864996 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 11:33:49.865004 kernel: Key type asymmetric registered
Jul 15 11:33:49.865010 kernel: Asymmetric key parser 'x509' registered
Jul 15 11:33:49.865017 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 11:33:49.865024 kernel: io scheduler mq-deadline registered
Jul 15 11:33:49.865031 kernel: io scheduler kyber registered
Jul 15 11:33:49.865039 kernel: io scheduler bfq registered
Jul 15 11:33:49.865046 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 11:33:49.865053 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 11:33:49.865062 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 11:33:49.865069 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 15 11:33:49.865076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 11:33:49.865083 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 11:33:49.865090 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 11:33:49.865097 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 11:33:49.865104 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 11:33:49.865173 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 15 11:33:49.865183 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 11:33:49.865249 kernel: rtc_cmos 00:04: registered as rtc0
Jul 15 11:33:49.865311 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T11:33:49 UTC (1752579229)
Jul 15 11:33:49.865372 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 11:33:49.865381 kernel: NET: Registered PF_INET6 protocol family
Jul 15 11:33:49.865388 kernel: Segment Routing with IPv6
Jul 15 11:33:49.865395 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 11:33:49.865402 kernel: NET: Registered PF_PACKET protocol family
Jul 15 11:33:49.865409 kernel: Key type dns_resolver registered
Jul 15 11:33:49.865418 kernel: IPI shorthand broadcast: enabled
Jul 15 11:33:49.865425 kernel: sched_clock: Marking stable (393464722, 98033763)->(537467291, -45968806)
Jul 15 11:33:49.865432 kernel: registered taskstats version 1
Jul 15 11:33:49.865439 kernel: Loading compiled-in X.509 certificates
Jul 15 11:33:49.865446 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.188-flatcar: c4b3a19d3bd6de5654dc12075428550cf6251289'
Jul 15 11:33:49.865453 kernel: Key type .fscrypt registered
Jul 15 11:33:49.865459 kernel: Key type fscrypt-provisioning registered
Jul 15 11:33:49.865481 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 11:33:49.865488 kernel: ima: Allocated hash algorithm: sha1
Jul 15 11:33:49.865497 kernel: ima: No architecture policies found
Jul 15 11:33:49.865503 kernel: clk: Disabling unused clocks
Jul 15 11:33:49.865510 kernel: Freeing unused kernel image (initmem) memory: 47476K
Jul 15 11:33:49.865517 kernel: Write protecting the kernel read-only data: 28672k
Jul 15 11:33:49.865524 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 15 11:33:49.865531 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Jul 15 11:33:49.865538 kernel: Run /init as init process
Jul 15 11:33:49.865557 kernel: with arguments:
Jul 15 11:33:49.865564 kernel: /init
Jul 15 11:33:49.865572 kernel: with environment:
Jul 15 11:33:49.865579 kernel: HOME=/
Jul 15 11:33:49.865586 kernel: TERM=linux
Jul 15 11:33:49.865593 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 11:33:49.865602 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:33:49.865611 systemd[1]: Detected virtualization kvm.
Jul 15 11:33:49.865619 systemd[1]: Detected architecture x86-64.
Jul 15 11:33:49.865644 systemd[1]: Running in initrd.
Jul 15 11:33:49.865654 systemd[1]: No hostname configured, using default hostname.
Jul 15 11:33:49.865661 systemd[1]: Hostname set to .
Jul 15 11:33:49.865669 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:33:49.865677 systemd[1]: Queued start job for default target initrd.target.
Jul 15 11:33:49.865684 systemd[1]: Started systemd-ask-password-console.path.
Jul 15 11:33:49.865691 systemd[1]: Reached target cryptsetup.target.
Jul 15 11:33:49.865698 systemd[1]: Reached target paths.target.
Jul 15 11:33:49.865718 systemd[1]: Reached target slices.target.
Jul 15 11:33:49.865727 systemd[1]: Reached target swap.target.
Jul 15 11:33:49.865741 systemd[1]: Reached target timers.target.
Jul 15 11:33:49.865750 systemd[1]: Listening on iscsid.socket.
Jul 15 11:33:49.865758 systemd[1]: Listening on iscsiuio.socket.
Jul 15 11:33:49.865784 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 15 11:33:49.865794 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 15 11:33:49.865802 systemd[1]: Listening on systemd-journald.socket.
Jul 15 11:33:49.865809 systemd[1]: Listening on systemd-networkd.socket.
Jul 15 11:33:49.865818 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 15 11:33:49.865826 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 15 11:33:49.865845 systemd[1]: Reached target sockets.target.
Jul 15 11:33:49.865853 systemd[1]: Starting kmod-static-nodes.service...
Jul 15 11:33:49.865861 systemd[1]: Finished network-cleanup.service.
Jul 15 11:33:49.865869 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 11:33:49.865878 systemd[1]: Starting systemd-journald.service...
Jul 15 11:33:49.865886 systemd[1]: Starting systemd-modules-load.service...
Jul 15 11:33:49.865901 systemd[1]: Starting systemd-resolved.service...
Jul 15 11:33:49.865913 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 15 11:33:49.865923 systemd-journald[198]: Journal started
Jul 15 11:33:49.865974 systemd-journald[198]: Runtime Journal (/run/log/journal/0b1deefbd73b4826a174b4bba7e0907f) is 6.0M, max 48.5M, 42.5M free.
Jul 15 11:33:49.858310 systemd-modules-load[199]: Inserted module 'overlay'
Jul 15 11:33:49.891422 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 11:33:49.891452 systemd[1]: Started systemd-journald.service.
Jul 15 11:33:49.891467 kernel: Bridge firewalling registered
Jul 15 11:33:49.874908 systemd-resolved[200]: Positive Trust Anchors:
Jul 15 11:33:49.874917 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 11:33:49.874943 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 15 11:33:49.877069 systemd-resolved[200]: Defaulting to hostname 'linux'.
Jul 15 11:33:49.891323 systemd-modules-load[199]: Inserted module 'br_netfilter'
Jul 15 11:33:49.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.901829 systemd[1]: Started systemd-resolved.service.
Jul 15 11:33:49.906076 kernel: audit: type=1130 audit(1752579229.901:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.906093 kernel: audit: type=1130 audit(1752579229.905:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.906277 systemd[1]: Finished kmod-static-nodes.service.
Jul 15 11:33:49.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.910632 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 11:33:49.918321 kernel: audit: type=1130 audit(1752579229.909:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.918346 kernel: audit: type=1130 audit(1752579229.913:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.918356 kernel: audit: type=1130 audit(1752579229.917:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:49.914083 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 15 11:33:49.924855 kernel: SCSI subsystem initialized
Jul 15 11:33:49.918401 systemd[1]: Reached target nss-lookup.target.
Jul 15 11:33:49.922515 systemd[1]: Starting dracut-cmdline-ask.service... Jul 15 11:33:49.923819 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 15 11:33:49.928834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 15 11:33:49.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:49.933576 kernel: audit: type=1130 audit(1752579229.928:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:49.936067 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 11:33:49.936121 kernel: device-mapper: uevent: version 1.0.3 Jul 15 11:33:49.936132 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 15 11:33:49.938418 systemd[1]: Finished dracut-cmdline-ask.service. Jul 15 11:33:49.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:49.940043 systemd[1]: Starting dracut-cmdline.service... Jul 15 11:33:49.944352 kernel: audit: type=1130 audit(1752579229.938:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:49.943629 systemd-modules-load[199]: Inserted module 'dm_multipath' Jul 15 11:33:49.950669 kernel: audit: type=1130 audit(1752579229.944:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:49.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:49.944276 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:33:49.945701 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:33:49.953641 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:33:49.957755 kernel: audit: type=1130 audit(1752579229.953:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:49.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:49.960374 dracut-cmdline[216]: dracut-dracut-053 Jul 15 11:33:49.962648 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3fdbb2e3469f90ee764ea38c6fc4332d45967696e3c4fd4a8c65f8d0125b235b Jul 15 11:33:50.022570 kernel: Loading iSCSI transport class v2.0-870. Jul 15 11:33:50.038573 kernel: iscsi: registered transport (tcp) Jul 15 11:33:50.060041 kernel: iscsi: registered transport (qla4xxx) Jul 15 11:33:50.060060 kernel: QLogic iSCSI HBA Driver Jul 15 11:33:50.086506 systemd[1]: Finished dracut-cmdline.service. Jul 15 11:33:50.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:50.088097 systemd[1]: Starting dracut-pre-udev.service... Jul 15 11:33:50.132574 kernel: raid6: avx2x4 gen() 30210 MB/s Jul 15 11:33:50.149575 kernel: raid6: avx2x4 xor() 7651 MB/s Jul 15 11:33:50.166571 kernel: raid6: avx2x2 gen() 32416 MB/s Jul 15 11:33:50.183571 kernel: raid6: avx2x2 xor() 19186 MB/s Jul 15 11:33:50.200577 kernel: raid6: avx2x1 gen() 26386 MB/s Jul 15 11:33:50.217571 kernel: raid6: avx2x1 xor() 15380 MB/s Jul 15 11:33:50.234573 kernel: raid6: sse2x4 gen() 14646 MB/s Jul 15 11:33:50.251578 kernel: raid6: sse2x4 xor() 7011 MB/s Jul 15 11:33:50.268565 kernel: raid6: sse2x2 gen() 16293 MB/s Jul 15 11:33:50.285575 kernel: raid6: sse2x2 xor() 9839 MB/s Jul 15 11:33:50.302568 kernel: raid6: sse2x1 gen() 12373 MB/s Jul 15 11:33:50.319902 kernel: raid6: sse2x1 xor() 7777 MB/s Jul 15 11:33:50.319916 kernel: raid6: using algorithm avx2x2 gen() 32416 MB/s Jul 15 11:33:50.319926 kernel: raid6: .... xor() 19186 MB/s, rmw enabled Jul 15 11:33:50.320586 kernel: raid6: using avx2x2 recovery algorithm Jul 15 11:33:50.332567 kernel: xor: automatically using best checksumming function avx Jul 15 11:33:50.421567 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 15 11:33:50.429905 systemd[1]: Finished dracut-pre-udev.service. Jul 15 11:33:50.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:50.430000 audit: BPF prog-id=7 op=LOAD Jul 15 11:33:50.431000 audit: BPF prog-id=8 op=LOAD Jul 15 11:33:50.431944 systemd[1]: Starting systemd-udevd.service... Jul 15 11:33:50.444248 systemd-udevd[399]: Using default interface naming scheme 'v252'. Jul 15 11:33:50.448263 systemd[1]: Started systemd-udevd.service. Jul 15 11:33:50.448976 systemd[1]: Starting dracut-pre-trigger.service... 
Jul 15 11:33:50.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:50.458838 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jul 15 11:33:50.482148 systemd[1]: Finished dracut-pre-trigger.service. Jul 15 11:33:50.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:50.482915 systemd[1]: Starting systemd-udev-trigger.service... Jul 15 11:33:50.514702 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:33:50.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:50.543992 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 11:33:50.563262 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 11:33:50.563277 kernel: AVX2 version of gcm_enc/dec engaged. Jul 15 11:33:50.563287 kernel: AES CTR mode by8 optimization enabled Jul 15 11:33:50.563301 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 11:33:50.563310 kernel: GPT:9289727 != 19775487 Jul 15 11:33:50.563318 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 11:33:50.563327 kernel: GPT:9289727 != 19775487 Jul 15 11:33:50.563337 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 11:33:50.563346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:33:50.564565 kernel: libata version 3.00 loaded. 
Jul 15 11:33:50.572572 kernel: ahci 0000:00:1f.2: version 3.0 Jul 15 11:33:50.577358 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 15 11:33:50.577375 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 15 11:33:50.577463 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 15 11:33:50.577557 kernel: scsi host0: ahci Jul 15 11:33:50.577670 kernel: scsi host1: ahci Jul 15 11:33:50.577762 kernel: scsi host2: ahci Jul 15 11:33:50.577845 kernel: scsi host3: ahci Jul 15 11:33:50.577934 kernel: scsi host4: ahci Jul 15 11:33:50.578019 kernel: scsi host5: ahci Jul 15 11:33:50.578114 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 15 11:33:50.578124 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 15 11:33:50.578132 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 15 11:33:50.578141 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 15 11:33:50.578150 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 15 11:33:50.578158 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 15 11:33:50.585569 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (446) Jul 15 11:33:50.586444 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 15 11:33:50.624527 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 15 11:33:50.626968 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 15 11:33:50.631827 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 15 11:33:50.641185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:33:50.643707 systemd[1]: Starting disk-uuid.service... Jul 15 11:33:50.814587 disk-uuid[526]: Primary Header is updated. Jul 15 11:33:50.814587 disk-uuid[526]: Secondary Entries is updated. 
Jul 15 11:33:50.814587 disk-uuid[526]: Secondary Header is updated. Jul 15 11:33:50.818574 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:33:50.821571 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:33:50.886203 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 15 11:33:50.886273 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 15 11:33:50.886293 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 15 11:33:50.887573 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 15 11:33:50.888574 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 15 11:33:50.889568 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 15 11:33:50.890644 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 15 11:33:50.890655 kernel: ata3.00: applying bridge limits Jul 15 11:33:50.891871 kernel: ata3.00: configured for UDMA/100 Jul 15 11:33:50.892570 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 15 11:33:50.925590 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 15 11:33:50.942078 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 11:33:50.942090 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 11:33:51.822476 disk-uuid[527]: The operation has completed successfully. Jul 15 11:33:51.823586 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 11:33:51.842074 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 11:33:51.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:51.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:51.842148 systemd[1]: Finished disk-uuid.service. Jul 15 11:33:51.848120 systemd[1]: Starting verity-setup.service... 
Jul 15 11:33:51.860573 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 15 11:33:51.878121 systemd[1]: Found device dev-mapper-usr.device. Jul 15 11:33:51.880382 systemd[1]: Mounting sysusr-usr.mount... Jul 15 11:33:51.883652 systemd[1]: Finished verity-setup.service. Jul 15 11:33:51.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:51.939465 systemd[1]: Mounted sysusr-usr.mount. Jul 15 11:33:51.941001 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 15 11:33:51.940367 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 15 11:33:51.941120 systemd[1]: Starting ignition-setup.service... Jul 15 11:33:51.944033 systemd[1]: Starting parse-ip-for-networkd.service... Jul 15 11:33:51.949887 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:33:51.949939 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:33:51.949949 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:33:51.957212 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 15 11:33:51.964855 systemd[1]: Finished ignition-setup.service. Jul 15 11:33:51.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:51.965738 systemd[1]: Starting ignition-fetch-offline.service... 
Jul 15 11:33:51.998595 ignition[640]: Ignition 2.14.0 Jul 15 11:33:51.998604 ignition[640]: Stage: fetch-offline Jul 15 11:33:51.998669 ignition[640]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:33:51.998677 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:33:51.998779 ignition[640]: parsed url from cmdline: "" Jul 15 11:33:51.998782 ignition[640]: no config URL provided Jul 15 11:33:51.998786 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 11:33:51.998792 ignition[640]: no config at "/usr/lib/ignition/user.ign" Jul 15 11:33:51.998807 ignition[640]: op(1): [started] loading QEMU firmware config module Jul 15 11:33:51.998811 ignition[640]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 11:33:52.007499 systemd[1]: Finished parse-ip-for-networkd.service. Jul 15 11:33:52.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.008000 audit: BPF prog-id=9 op=LOAD Jul 15 11:33:52.007493 ignition[640]: op(1): [finished] loading QEMU firmware config module Jul 15 11:33:52.009921 systemd[1]: Starting systemd-networkd.service... Jul 15 11:33:52.049393 ignition[640]: parsing config with SHA512: 7adf032c801ac7a6f53207f3d1e2fe9eac84176f2d437d8f343de3e7454f493fc3ffba5dc5eae529809701313b2817455a15110d5532c6c7342712aa3752f02f Jul 15 11:33:52.055149 unknown[640]: fetched base config from "system" Jul 15 11:33:52.055159 unknown[640]: fetched user config from "qemu" Jul 15 11:33:52.055601 ignition[640]: fetch-offline: fetch-offline passed Jul 15 11:33:52.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.056469 systemd[1]: Finished ignition-fetch-offline.service. 
Jul 15 11:33:52.055645 ignition[640]: Ignition finished successfully Jul 15 11:33:52.070220 systemd-networkd[721]: lo: Link UP Jul 15 11:33:52.070230 systemd-networkd[721]: lo: Gained carrier Jul 15 11:33:52.071851 systemd-networkd[721]: Enumeration completed Jul 15 11:33:52.071925 systemd[1]: Started systemd-networkd.service. Jul 15 11:33:52.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.072694 systemd[1]: Reached target network.target. Jul 15 11:33:52.074753 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 11:33:52.075368 systemd[1]: Starting ignition-kargs.service... Jul 15 11:33:52.076527 systemd[1]: Starting iscsiuio.service... Jul 15 11:33:52.078215 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:33:52.080481 systemd[1]: Started iscsiuio.service. Jul 15 11:33:52.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.083024 systemd[1]: Starting iscsid.service... Jul 15 11:33:52.084307 ignition[723]: Ignition 2.14.0 Jul 15 11:33:52.084314 ignition[723]: Stage: kargs Jul 15 11:33:52.084609 systemd-networkd[721]: eth0: Link UP Jul 15 11:33:52.084390 ignition[723]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:33:52.084613 systemd-networkd[721]: eth0: Gained carrier Jul 15 11:33:52.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:52.084398 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:33:52.086488 systemd[1]: Finished ignition-kargs.service. Jul 15 11:33:52.085319 ignition[723]: kargs: kargs passed Jul 15 11:33:52.088692 systemd[1]: Starting ignition-disks.service... Jul 15 11:33:52.085351 ignition[723]: Ignition finished successfully Jul 15 11:33:52.094569 iscsid[732]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:33:52.094569 iscsid[732]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 15 11:33:52.094569 iscsid[732]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 15 11:33:52.094569 iscsid[732]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 15 11:33:52.094569 iscsid[732]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 15 11:33:52.094569 iscsid[732]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 15 11:33:52.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.096958 systemd[1]: Finished ignition-disks.service. Jul 15 11:33:52.095145 ignition[733]: Ignition 2.14.0 Jul 15 11:33:52.100935 systemd[1]: Reached target initrd-root-device.target. Jul 15 11:33:52.095152 ignition[733]: Stage: disks Jul 15 11:33:52.102425 systemd[1]: Reached target local-fs-pre.target.
Jul 15 11:33:52.095245 ignition[733]: no configs at "/usr/lib/ignition/base.d" Jul 15 11:33:52.104209 systemd[1]: Reached target local-fs.target. Jul 15 11:33:52.095253 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:33:52.106125 systemd[1]: Reached target sysinit.target. Jul 15 11:33:52.096192 ignition[733]: disks: disks passed Jul 15 11:33:52.108643 systemd[1]: Reached target basic.target. Jul 15 11:33:52.096230 ignition[733]: Ignition finished successfully Jul 15 11:33:52.117875 systemd[1]: Started iscsid.service. Jul 15 11:33:52.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.118494 systemd[1]: Starting dracut-initqueue.service... Jul 15 11:33:52.126616 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:33:52.127728 systemd[1]: Finished dracut-initqueue.service. Jul 15 11:33:52.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.128810 systemd[1]: Reached target remote-fs-pre.target. Jul 15 11:33:52.130266 systemd[1]: Reached target remote-cryptsetup.target. Jul 15 11:33:52.131772 systemd[1]: Reached target remote-fs.target. Jul 15 11:33:52.134171 systemd[1]: Starting dracut-pre-mount.service... Jul 15 11:33:52.140637 systemd[1]: Finished dracut-pre-mount.service. Jul 15 11:33:52.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.141258 systemd[1]: Starting systemd-fsck-root.service... 
Jul 15 11:33:52.151246 systemd-fsck[754]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 15 11:33:52.156174 systemd[1]: Finished systemd-fsck-root.service. Jul 15 11:33:52.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.157809 systemd[1]: Mounting sysroot.mount... Jul 15 11:33:52.163464 systemd[1]: Mounted sysroot.mount. Jul 15 11:33:52.165603 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 15 11:33:52.164215 systemd[1]: Reached target initrd-root-fs.target. Jul 15 11:33:52.166645 systemd[1]: Mounting sysroot-usr.mount... Jul 15 11:33:52.167537 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 15 11:33:52.167577 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 11:33:52.167594 systemd[1]: Reached target ignition-diskful.target. Jul 15 11:33:52.169431 systemd[1]: Mounted sysroot-usr.mount. Jul 15 11:33:52.171414 systemd[1]: Starting initrd-setup-root.service... Jul 15 11:33:52.175945 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 11:33:52.178419 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Jul 15 11:33:52.182031 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 11:33:52.185690 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 11:33:52.209990 systemd[1]: Finished initrd-setup-root.service. Jul 15 11:33:52.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:52.210994 systemd[1]: Starting ignition-mount.service... Jul 15 11:33:52.212838 systemd[1]: Starting sysroot-boot.service... Jul 15 11:33:52.217368 bash[805]: umount: /sysroot/usr/share/oem: not mounted. Jul 15 11:33:52.225218 ignition[807]: INFO : Ignition 2.14.0 Jul 15 11:33:52.225218 ignition[807]: INFO : Stage: mount Jul 15 11:33:52.226918 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:33:52.226918 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:33:52.226918 ignition[807]: INFO : mount: mount passed Jul 15 11:33:52.226918 ignition[807]: INFO : Ignition finished successfully Jul 15 11:33:52.230149 systemd[1]: Finished ignition-mount.service. Jul 15 11:33:52.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:52.231947 systemd[1]: Finished sysroot-boot.service. Jul 15 11:33:52.424640 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.101 Jul 15 11:33:52.424655 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Jul 15 11:33:52.888761 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 15 11:33:52.897225 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Jul 15 11:33:52.897255 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 11:33:52.897266 kernel: BTRFS info (device vda6): using free space tree Jul 15 11:33:52.897998 kernel: BTRFS info (device vda6): has skinny extents Jul 15 11:33:52.901894 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 15 11:33:52.904226 systemd[1]: Starting ignition-files.service... Jul 15 11:33:52.917450 ignition[836]: INFO : Ignition 2.14.0 Jul 15 11:33:52.917450 ignition[836]: INFO : Stage: files Jul 15 11:33:52.919229 ignition[836]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 11:33:52.919229 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 11:33:52.919229 ignition[836]: DEBUG : files: compiled without relabeling support, skipping Jul 15 11:33:52.923707 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 11:33:52.923707 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 11:33:52.923707 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 11:33:52.923707 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 11:33:52.923707 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 11:33:52.923707 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 11:33:52.923707 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 15 11:33:52.921887 unknown[836]: wrote ssh authorized keys file for user: core Jul 15 11:33:52.996453 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 11:33:53.217855 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 11:33:53.219799 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 11:33:53.219799 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 15 11:33:53.577525 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 15 11:33:53.665676 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 11:33:53.665676 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 11:33:53.669499 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 15 11:33:54.046718 systemd-networkd[721]: eth0: Gained IPv6LL
Jul 15 11:33:54.429670 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 15 11:33:54.921187 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 15 11:33:54.921187 ignition[836]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 11:33:54.924976 ignition[836]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 11:33:54.956717 ignition[836]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 11:33:54.958341 ignition[836]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 11:33:54.958341 ignition[836]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 11:33:54.958341 ignition[836]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 11:33:54.958341 ignition[836]: INFO : files: files passed
Jul 15 11:33:54.958341 ignition[836]: INFO : Ignition finished successfully
Jul 15 11:33:54.966380 systemd[1]: Finished ignition-files.service.
Jul 15 11:33:54.971926 kernel: kauditd_printk_skb: 25 callbacks suppressed
Jul 15 11:33:54.971946 kernel: audit: type=1130 audit(1752579234.966:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:54.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:54.968136 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 15 11:33:54.972935 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 15 11:33:54.973751 systemd[1]: Starting ignition-quench.service...
Jul 15 11:33:54.984006 kernel: audit: type=1130 audit(1752579234.977:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:54.984023 kernel: audit: type=1131 audit(1752579234.977:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:54.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:54.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:54.975804 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 11:33:54.975876 systemd[1]: Finished ignition-quench.service.
Jul 15 11:33:54.987096 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Jul 15 11:33:54.989575 initrd-setup-root-after-ignition[863]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 11:33:54.991523 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 15 11:33:54.996622 kernel: audit: type=1130 audit(1752579234.991:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:54.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:54.991677 systemd[1]: Reached target ignition-complete.target.
Jul 15 11:33:54.998157 systemd[1]: Starting initrd-parse-etc.service...
Jul 15 11:33:55.008739 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 11:33:55.008818 systemd[1]: Finished initrd-parse-etc.service.
Jul 15 11:33:55.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.011283 systemd[1]: Reached target initrd-fs.target.
Jul 15 11:33:55.018372 kernel: audit: type=1130 audit(1752579235.010:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.018388 kernel: audit: type=1131 audit(1752579235.010:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.017027 systemd[1]: Reached target initrd.target.
Jul 15 11:33:55.019148 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 15 11:33:55.019864 systemd[1]: Starting dracut-pre-pivot.service...
Jul 15 11:33:55.032118 systemd[1]: Finished dracut-pre-pivot.service.
Jul 15 11:33:55.036493 kernel: audit: type=1130 audit(1752579235.031:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.036529 systemd[1]: Starting initrd-cleanup.service...
Jul 15 11:33:55.045958 systemd[1]: Stopped target nss-lookup.target.
Jul 15 11:33:55.046125 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 15 11:33:55.047638 systemd[1]: Stopped target timers.target.
Jul 15 11:33:55.049154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 11:33:55.054842 kernel: audit: type=1131 audit(1752579235.049:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.049240 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 15 11:33:55.050588 systemd[1]: Stopped target initrd.target.
Jul 15 11:33:55.055660 systemd[1]: Stopped target basic.target.
Jul 15 11:33:55.056393 systemd[1]: Stopped target ignition-complete.target.
Jul 15 11:33:55.057749 systemd[1]: Stopped target ignition-diskful.target.
Jul 15 11:33:55.059191 systemd[1]: Stopped target initrd-root-device.target.
Jul 15 11:33:55.060707 systemd[1]: Stopped target remote-fs.target.
Jul 15 11:33:55.062229 systemd[1]: Stopped target remote-fs-pre.target.
Jul 15 11:33:55.064458 systemd[1]: Stopped target sysinit.target.
Jul 15 11:33:55.065211 systemd[1]: Stopped target local-fs.target.
Jul 15 11:33:55.066646 systemd[1]: Stopped target local-fs-pre.target.
Jul 15 11:33:55.068000 systemd[1]: Stopped target swap.target.
Jul 15 11:33:55.074820 kernel: audit: type=1131 audit(1752579235.069:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.069352 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 11:33:55.069431 systemd[1]: Stopped dracut-pre-mount.service.
Jul 15 11:33:55.080859 kernel: audit: type=1131 audit(1752579235.076:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.069916 systemd[1]: Stopped target cryptsetup.target.
Jul 15 11:33:55.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.075652 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 11:33:55.075730 systemd[1]: Stopped dracut-initqueue.service.
Jul 15 11:33:55.076695 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 11:33:55.076774 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 15 11:33:55.081810 systemd[1]: Stopped target paths.target.
Jul 15 11:33:55.083163 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 11:33:55.084747 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 15 11:33:55.086573 systemd[1]: Stopped target slices.target.
Jul 15 11:33:55.087367 systemd[1]: Stopped target sockets.target.
Jul 15 11:33:55.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.088668 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 11:33:55.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.088723 systemd[1]: Closed iscsid.socket.
Jul 15 11:33:55.090919 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 11:33:55.090973 systemd[1]: Closed iscsiuio.socket.
Jul 15 11:33:55.091771 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 11:33:55.091850 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 15 11:33:55.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.093042 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 11:33:55.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.093141 systemd[1]: Stopped ignition-files.service.
Jul 15 11:33:55.104820 ignition[876]: INFO : Ignition 2.14.0
Jul 15 11:33:55.104820 ignition[876]: INFO : Stage: umount
Jul 15 11:33:55.104820 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 11:33:55.104820 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 11:33:55.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.096045 systemd[1]: Stopping ignition-mount.service...
Jul 15 11:33:55.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.113452 ignition[876]: INFO : umount: umount passed
Jul 15 11:33:55.113452 ignition[876]: INFO : Ignition finished successfully
Jul 15 11:33:55.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.098207 systemd[1]: Stopping sysroot-boot.service...
Jul 15 11:33:55.099448 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 11:33:55.099698 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 15 11:33:55.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.100107 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 11:33:55.100193 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 15 11:33:55.106151 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 11:33:55.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.106248 systemd[1]: Finished initrd-cleanup.service.
Jul 15 11:33:55.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.107425 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 11:33:55.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.107501 systemd[1]: Stopped ignition-mount.service.
Jul 15 11:33:55.109797 systemd[1]: Stopped target network.target.
Jul 15 11:33:55.110999 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 11:33:55.111044 systemd[1]: Stopped ignition-disks.service.
Jul 15 11:33:55.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.111887 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 11:33:55.111925 systemd[1]: Stopped ignition-kargs.service.
Jul 15 11:33:55.113406 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 11:33:55.113436 systemd[1]: Stopped ignition-setup.service.
Jul 15 11:33:55.139000 audit: BPF prog-id=6 op=UNLOAD
Jul 15 11:33:55.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.114324 systemd[1]: Stopping systemd-networkd.service...
Jul 15 11:33:55.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.115858 systemd[1]: Stopping systemd-resolved.service...
Jul 15 11:33:55.117638 systemd-networkd[721]: eth0: DHCPv6 lease lost
Jul 15 11:33:55.144000 audit: BPF prog-id=9 op=UNLOAD
Jul 15 11:33:55.118362 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 11:33:55.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.118728 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 11:33:55.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.118800 systemd[1]: Stopped systemd-networkd.service.
Jul 15 11:33:55.121690 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 11:33:55.121716 systemd[1]: Closed systemd-networkd.socket.
Jul 15 11:33:55.124282 systemd[1]: Stopping network-cleanup.service...
Jul 15 11:33:55.125022 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 11:33:55.125060 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 15 11:33:55.127018 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 11:33:55.127052 systemd[1]: Stopped systemd-sysctl.service.
Jul 15 11:33:55.129090 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 11:33:55.129132 systemd[1]: Stopped systemd-modules-load.service.
Jul 15 11:33:55.131307 systemd[1]: Stopping systemd-udevd.service...
Jul 15 11:33:55.134125 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 11:33:55.134592 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 11:33:55.134672 systemd[1]: Stopped systemd-resolved.service.
Jul 15 11:33:55.140047 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 11:33:55.140138 systemd[1]: Stopped network-cleanup.service.
Jul 15 11:33:55.141490 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 11:33:55.141598 systemd[1]: Stopped systemd-udevd.service.
Jul 15 11:33:55.143033 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 11:33:55.143061 systemd[1]: Closed systemd-udevd-control.socket.
Jul 15 11:33:55.144758 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 11:33:55.144800 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 15 11:33:55.146664 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 11:33:55.146698 systemd[1]: Stopped dracut-pre-udev.service.
Jul 15 11:33:55.146810 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 11:33:55.146849 systemd[1]: Stopped dracut-cmdline.service.
Jul 15 11:33:55.146999 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 11:33:55.147027 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 15 11:33:55.147761 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 15 11:33:55.147972 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 11:33:55.148009 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Jul 15 11:33:55.150870 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 11:33:55.150901 systemd[1]: Stopped kmod-static-nodes.service.
Jul 15 11:33:55.152816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 11:33:55.152861 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 15 11:33:55.154612 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 11:33:55.155017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 11:33:55.155082 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 15 11:33:55.205442 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 11:33:55.205563 systemd[1]: Stopped sysroot-boot.service.
Jul 15 11:33:55.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.207329 systemd[1]: Reached target initrd-switch-root.target.
Jul 15 11:33:55.208769 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 11:33:55.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:55.208820 systemd[1]: Stopped initrd-setup-root.service.
Jul 15 11:33:55.211313 systemd[1]: Starting initrd-switch-root.service...
Jul 15 11:33:55.225226 systemd[1]: Switching root.
Jul 15 11:33:55.245610 iscsid[732]: iscsid shutting down.
Jul 15 11:33:55.246352 systemd-journald[198]: Journal stopped
Jul 15 11:33:58.259844 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Jul 15 11:33:58.259902 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 15 11:33:58.259916 kernel: SELinux: Class anon_inode not defined in policy.
Jul 15 11:33:58.259926 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 15 11:33:58.259937 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 11:33:58.259947 kernel: SELinux: policy capability open_perms=1
Jul 15 11:33:58.259957 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 11:33:58.259966 kernel: SELinux: policy capability always_check_network=0
Jul 15 11:33:58.259975 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 11:33:58.259984 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 11:33:58.259994 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 11:33:58.260003 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 11:33:58.260014 systemd[1]: Successfully loaded SELinux policy in 41.472ms.
Jul 15 11:33:58.260048 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.307ms.
Jul 15 11:33:58.260060 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 15 11:33:58.260071 systemd[1]: Detected virtualization kvm.
Jul 15 11:33:58.260080 systemd[1]: Detected architecture x86-64.
Jul 15 11:33:58.260090 systemd[1]: Detected first boot.
Jul 15 11:33:58.260102 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 11:33:58.260112 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 15 11:33:58.260121 systemd[1]: Populated /etc with preset unit settings.
Jul 15 11:33:58.260131 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:33:58.260150 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:33:58.260161 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:33:58.260176 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 15 11:33:58.260187 systemd[1]: Stopped iscsiuio.service.
Jul 15 11:33:58.260197 systemd[1]: iscsid.service: Deactivated successfully.
Jul 15 11:33:58.260207 systemd[1]: Stopped iscsid.service.
Jul 15 11:33:58.260217 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 11:33:58.260229 systemd[1]: Stopped initrd-switch-root.service.
Jul 15 11:33:58.260239 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 11:33:58.260250 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 15 11:33:58.260260 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 15 11:33:58.260270 systemd[1]: Created slice system-getty.slice.
Jul 15 11:33:58.260283 systemd[1]: Created slice system-modprobe.slice.
Jul 15 11:33:58.260293 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 15 11:33:58.260303 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 15 11:33:58.260313 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 15 11:33:58.260323 systemd[1]: Created slice user.slice.
Jul 15 11:33:58.260334 systemd[1]: Started systemd-ask-password-console.path.
Jul 15 11:33:58.260344 systemd[1]: Started systemd-ask-password-wall.path.
Jul 15 11:33:58.260354 systemd[1]: Set up automount boot.automount.
Jul 15 11:33:58.260365 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 15 11:33:58.260375 systemd[1]: Stopped target initrd-switch-root.target.
Jul 15 11:33:58.260386 systemd[1]: Stopped target initrd-fs.target.
Jul 15 11:33:58.260396 systemd[1]: Stopped target initrd-root-fs.target.
Jul 15 11:33:58.260406 systemd[1]: Reached target integritysetup.target.
Jul 15 11:33:58.260415 systemd[1]: Reached target remote-cryptsetup.target.
Jul 15 11:33:58.260425 systemd[1]: Reached target remote-fs.target.
Jul 15 11:33:58.260435 systemd[1]: Reached target slices.target.
Jul 15 11:33:58.260445 systemd[1]: Reached target swap.target.
Jul 15 11:33:58.260456 systemd[1]: Reached target torcx.target.
Jul 15 11:33:58.260466 systemd[1]: Reached target veritysetup.target.
Jul 15 11:33:58.260476 systemd[1]: Listening on systemd-coredump.socket.
Jul 15 11:33:58.260486 systemd[1]: Listening on systemd-initctl.socket.
Jul 15 11:33:58.260497 systemd[1]: Listening on systemd-networkd.socket.
Jul 15 11:33:58.260507 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 15 11:33:58.260517 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 15 11:33:58.260527 systemd[1]: Listening on systemd-userdbd.socket.
Jul 15 11:33:58.260554 systemd[1]: Mounting dev-hugepages.mount...
Jul 15 11:33:58.260565 systemd[1]: Mounting dev-mqueue.mount...
Jul 15 11:33:58.260577 systemd[1]: Mounting media.mount...
Jul 15 11:33:58.260588 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:33:58.260598 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 15 11:33:58.260608 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 15 11:33:58.260618 systemd[1]: Mounting tmp.mount...
Jul 15 11:33:58.260628 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 15 11:33:58.260638 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 15 11:33:58.260649 systemd[1]: Starting kmod-static-nodes.service...
Jul 15 11:33:58.260659 systemd[1]: Starting modprobe@configfs.service...
Jul 15 11:33:58.260671 systemd[1]: Starting modprobe@dm_mod.service...
Jul 15 11:33:58.260681 systemd[1]: Starting modprobe@drm.service...
Jul 15 11:33:58.260691 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 15 11:33:58.260701 systemd[1]: Starting modprobe@fuse.service...
Jul 15 11:33:58.260711 systemd[1]: Starting modprobe@loop.service...
Jul 15 11:33:58.260721 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 11:33:58.260734 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 11:33:58.260744 systemd[1]: Stopped systemd-fsck-root.service.
Jul 15 11:33:58.260755 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 11:33:58.260765 kernel: loop: module loaded
Jul 15 11:33:58.260775 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 11:33:58.260785 kernel: fuse: init (API version 7.34)
Jul 15 11:33:58.260795 systemd[1]: Stopped systemd-journald.service.
Jul 15 11:33:58.260805 systemd[1]: Starting systemd-journald.service...
Jul 15 11:33:58.260815 systemd[1]: Starting systemd-modules-load.service...
Jul 15 11:33:58.260826 systemd[1]: Starting systemd-network-generator.service...
Jul 15 11:33:58.260836 systemd[1]: Starting systemd-remount-fs.service...
Jul 15 11:33:58.260846 systemd[1]: Starting systemd-udev-trigger.service...
Jul 15 11:33:58.260857 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 11:33:58.260867 systemd[1]: Stopped verity-setup.service.
Jul 15 11:33:58.260877 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 11:33:58.260890 systemd-journald[992]: Journal started
Jul 15 11:33:58.260925 systemd-journald[992]: Runtime Journal (/run/log/journal/0b1deefbd73b4826a174b4bba7e0907f) is 6.0M, max 48.5M, 42.5M free.
Jul 15 11:33:55.317000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 11:33:56.048000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 15 11:33:56.048000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 15 11:33:56.048000 audit: BPF prog-id=10 op=LOAD
Jul 15 11:33:56.048000 audit: BPF prog-id=10 op=UNLOAD
Jul 15 11:33:56.048000 audit: BPF prog-id=11 op=LOAD
Jul 15 11:33:56.048000 audit: BPF prog-id=11 op=UNLOAD
Jul 15 11:33:56.078000 audit[911]: AVC avc: denied { associate } for pid=911 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 15 11:33:56.078000 audit[911]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e4 a1=c00002ae40 a2=c000029080 a3=32 items=0 ppid=894 pid=911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:33:56.078000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 15 11:33:56.080000 audit[911]: AVC avc: denied { associate } for pid=911 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 15 11:33:56.080000 audit[911]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=894 pid=911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 15 11:33:56.080000 audit: CWD cwd="/"
Jul 15 11:33:56.080000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:33:56.080000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 15 11:33:56.080000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 15 11:33:58.129000 audit: BPF prog-id=12 op=LOAD
Jul 15 11:33:58.129000 audit: BPF prog-id=3 op=UNLOAD
Jul 15 11:33:58.129000 audit: BPF prog-id=13 op=LOAD
Jul 15 11:33:58.129000 audit: BPF prog-id=14 op=LOAD
Jul 15 11:33:58.129000 audit: BPF prog-id=4 op=UNLOAD
Jul 15 11:33:58.129000 audit: BPF prog-id=5 op=UNLOAD
Jul 15 11:33:58.130000 audit: BPF prog-id=15 op=LOAD
Jul 15 11:33:58.130000 audit: BPF prog-id=12 op=UNLOAD
Jul 15 11:33:58.130000 audit: BPF prog-id=16 op=LOAD
Jul 15 11:33:58.130000 audit: BPF prog-id=17 op=LOAD
Jul 15 11:33:58.130000 audit: BPF prog-id=13 op=UNLOAD
Jul 15 11:33:58.130000 audit: BPF prog-id=14 op=UNLOAD
Jul 15 11:33:58.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:58.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:58.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:58.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:58.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:58.143000 audit: BPF prog-id=15 op=UNLOAD
Jul 15 11:33:58.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:58.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:58.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 15 11:33:58.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 15 11:33:58.243000 audit: BPF prog-id=18 op=LOAD Jul 15 11:33:58.243000 audit: BPF prog-id=19 op=LOAD Jul 15 11:33:58.243000 audit: BPF prog-id=20 op=LOAD Jul 15 11:33:58.243000 audit: BPF prog-id=16 op=UNLOAD Jul 15 11:33:58.243000 audit: BPF prog-id=17 op=UNLOAD Jul 15 11:33:58.258000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 15 11:33:58.258000 audit[992]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe0f2f2fe0 a2=4000 a3=7ffe0f2f307c items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:33:58.258000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 15 11:33:58.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.128859 systemd[1]: Queued start job for default target multi-user.target. Jul 15 11:33:56.077483 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:33:58.128868 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 15 11:33:56.077687 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:33:58.132360 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 15 11:33:56.077702 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:33:56.077727 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 15 11:33:56.077735 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 15 11:33:56.077761 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 15 11:33:56.077771 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 15 11:33:56.077944 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 15 11:33:56.077976 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 15 11:33:56.077987 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 15 11:33:56.078468 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 15 11:33:56.078501 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 15 11:33:56.078516 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.100: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.100 Jul 15 11:33:56.078529 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 15 11:33:56.078555 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.100: no such file or directory" path=/var/lib/torcx/store/3510.3.100 Jul 15 11:33:56.078567 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 15 11:33:57.864606 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:33:57.864834 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:57Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:33:57.864920 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl 
Jul 15 11:33:57.865066 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 15 11:33:57.865110 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 15 11:33:57.865159 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-07-15T11:33:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 15 11:33:58.264574 systemd[1]: Started systemd-journald.service. Jul 15 11:33:58.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.265120 systemd[1]: Mounted dev-hugepages.mount. Jul 15 11:33:58.265982 systemd[1]: Mounted dev-mqueue.mount. Jul 15 11:33:58.266821 systemd[1]: Mounted media.mount. Jul 15 11:33:58.267588 systemd[1]: Mounted sys-kernel-debug.mount. Jul 15 11:33:58.268442 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 15 11:33:58.269343 systemd[1]: Mounted tmp.mount. Jul 15 11:33:58.270239 systemd[1]: Finished flatcar-tmpfiles.service. Jul 15 11:33:58.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:58.271328 systemd[1]: Finished kmod-static-nodes.service. Jul 15 11:33:58.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.272366 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 11:33:58.272505 systemd[1]: Finished modprobe@configfs.service. Jul 15 11:33:58.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.273576 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:33:58.273674 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:33:58.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.274687 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:33:58.274781 systemd[1]: Finished modprobe@drm.service. Jul 15 11:33:58.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:58.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.275812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:33:58.275926 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:33:58.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.277066 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 11:33:58.277174 systemd[1]: Finished modprobe@fuse.service. Jul 15 11:33:58.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.278154 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:33:58.278262 systemd[1]: Finished modprobe@loop.service. Jul 15 11:33:58.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:58.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.279282 systemd[1]: Finished systemd-modules-load.service. Jul 15 11:33:58.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.280422 systemd[1]: Finished systemd-network-generator.service. Jul 15 11:33:58.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.281597 systemd[1]: Finished systemd-remount-fs.service. Jul 15 11:33:58.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.282854 systemd[1]: Reached target network-pre.target. Jul 15 11:33:58.284686 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 15 11:33:58.286295 systemd[1]: Mounting sys-kernel-config.mount... Jul 15 11:33:58.287278 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 11:33:58.288360 systemd[1]: Starting systemd-hwdb-update.service... Jul 15 11:33:58.290095 systemd[1]: Starting systemd-journal-flush.service... Jul 15 11:33:58.291202 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:33:58.291877 systemd[1]: Starting systemd-random-seed.service... 
Jul 15 11:33:58.292940 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:33:58.293756 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:33:58.295124 systemd-journald[992]: Time spent on flushing to /var/log/journal/0b1deefbd73b4826a174b4bba7e0907f is 13.759ms for 1104 entries. Jul 15 11:33:58.295124 systemd-journald[992]: System Journal (/var/log/journal/0b1deefbd73b4826a174b4bba7e0907f) is 8.0M, max 195.6M, 187.6M free. Jul 15 11:33:58.326713 systemd-journald[992]: Received client request to flush runtime journal. Jul 15 11:33:58.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.295401 systemd[1]: Starting systemd-sysusers.service... Jul 15 11:33:58.298983 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 15 11:33:58.300194 systemd[1]: Mounted sys-kernel-config.mount. Jul 15 11:33:58.327605 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 15 11:33:58.302434 systemd[1]: Finished systemd-random-seed.service. 
Jul 15 11:33:58.303817 systemd[1]: Finished systemd-udev-trigger.service. Jul 15 11:33:58.305113 systemd[1]: Reached target first-boot-complete.target. Jul 15 11:33:58.307138 systemd[1]: Starting systemd-udev-settle.service... Jul 15 11:33:58.308152 systemd[1]: Finished systemd-sysctl.service. Jul 15 11:33:58.313477 systemd[1]: Finished systemd-sysusers.service. Jul 15 11:33:58.315217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 15 11:33:58.327286 systemd[1]: Finished systemd-journal-flush.service. Jul 15 11:33:58.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.332406 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 15 11:33:58.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.724130 systemd[1]: Finished systemd-hwdb-update.service. Jul 15 11:33:58.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.725000 audit: BPF prog-id=21 op=LOAD Jul 15 11:33:58.725000 audit: BPF prog-id=22 op=LOAD Jul 15 11:33:58.725000 audit: BPF prog-id=7 op=UNLOAD Jul 15 11:33:58.725000 audit: BPF prog-id=8 op=UNLOAD Jul 15 11:33:58.726335 systemd[1]: Starting systemd-udevd.service... Jul 15 11:33:58.741169 systemd-udevd[1019]: Using default interface naming scheme 'v252'. Jul 15 11:33:58.753181 systemd[1]: Started systemd-udevd.service. 
Jul 15 11:33:58.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.754000 audit: BPF prog-id=23 op=LOAD Jul 15 11:33:58.758859 systemd[1]: Starting systemd-networkd.service... Jul 15 11:33:58.762000 audit: BPF prog-id=24 op=LOAD Jul 15 11:33:58.762000 audit: BPF prog-id=25 op=LOAD Jul 15 11:33:58.762000 audit: BPF prog-id=26 op=LOAD Jul 15 11:33:58.763501 systemd[1]: Starting systemd-userdbd.service... Jul 15 11:33:58.785017 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 15 11:33:58.789883 systemd[1]: Started systemd-userdbd.service. Jul 15 11:33:58.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.797770 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 15 11:33:58.811656 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 15 11:33:58.816629 kernel: ACPI: button: Power Button [PWRF] Jul 15 11:33:58.826327 systemd-networkd[1033]: lo: Link UP Jul 15 11:33:58.826559 systemd-networkd[1033]: lo: Gained carrier Jul 15 11:33:58.826960 systemd-networkd[1033]: Enumeration completed Jul 15 11:33:58.827094 systemd[1]: Started systemd-networkd.service. Jul 15 11:33:58.827316 systemd-networkd[1033]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 11:33:58.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 15 11:33:58.828670 systemd-networkd[1033]: eth0: Link UP Jul 15 11:33:58.828754 systemd-networkd[1033]: eth0: Gained carrier Jul 15 11:33:58.825000 audit[1035]: AVC avc: denied { confidentiality } for pid=1035 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 15 11:33:58.838675 systemd-networkd[1033]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 11:33:58.825000 audit[1035]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b1d139f280 a1=338ac a2=7fb348882bc5 a3=5 items=110 ppid=1019 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:33:58.825000 audit: CWD cwd="/" Jul 15 11:33:58.825000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=1 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=2 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=3 name=(null) inode=13141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=4 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 
11:33:58.825000 audit: PATH item=5 name=(null) inode=13142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=6 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=7 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=8 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=9 name=(null) inode=13144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=10 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=11 name=(null) inode=13145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=12 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=13 name=(null) inode=13146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=14 name=(null) 
inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=15 name=(null) inode=13147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=16 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=17 name=(null) inode=13148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=18 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=19 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=20 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=21 name=(null) inode=13150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=22 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=23 name=(null) inode=13151 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=24 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=25 name=(null) inode=13152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=26 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=27 name=(null) inode=13153 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=28 name=(null) inode=13149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=29 name=(null) inode=13154 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=30 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=31 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=32 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=33 name=(null) inode=13156 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=34 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=35 name=(null) inode=13157 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=36 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=37 name=(null) inode=13158 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=38 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=39 name=(null) inode=13159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=40 name=(null) inode=13155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=41 name=(null) inode=13160 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=42 name=(null) inode=13140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=43 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=44 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=45 name=(null) inode=13162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=46 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=47 name=(null) inode=13163 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=48 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=49 name=(null) inode=13164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=50 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=51 name=(null) inode=13165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=52 name=(null) inode=13161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=53 name=(null) inode=13166 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=55 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=56 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=57 name=(null) inode=13168 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=58 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=59 name=(null) inode=13169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 
11:33:58.825000 audit: PATH item=60 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=61 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=62 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=63 name=(null) inode=13171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=64 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=65 name=(null) inode=13172 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=66 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=67 name=(null) inode=13173 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=68 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=69 
name=(null) inode=13174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=70 name=(null) inode=13170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=71 name=(null) inode=13175 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=72 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=73 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=74 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=75 name=(null) inode=13177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=76 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=77 name=(null) inode=13178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=78 name=(null) inode=13176 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=79 name=(null) inode=13179 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=80 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=81 name=(null) inode=13180 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=82 name=(null) inode=13176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=83 name=(null) inode=13181 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=84 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=85 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=86 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=87 name=(null) inode=13183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=88 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=89 name=(null) inode=13184 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=90 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=91 name=(null) inode=13185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=92 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=93 name=(null) inode=13186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=94 name=(null) inode=13182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=95 name=(null) inode=13187 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=96 name=(null) inode=13167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=97 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=98 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=99 name=(null) inode=13189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=100 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=101 name=(null) inode=13190 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=102 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=103 name=(null) inode=13191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=104 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=105 name=(null) inode=13192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=106 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=107 name=(null) inode=13193 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PATH item=109 name=(null) inode=13194 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 15 11:33:58.825000 audit: PROCTITLE proctitle="(udev-worker)" Jul 15 11:33:58.848558 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 11:33:58.852560 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 11:33:58.885570 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 11:33:58.887860 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 15 11:33:58.887983 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 11:33:58.918790 kernel: kvm: Nested Virtualization enabled Jul 15 11:33:58.918825 kernel: SVM: kvm: Nested Paging enabled Jul 15 11:33:58.918851 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 15 11:33:58.918865 kernel: SVM: Virtual GIF supported Jul 15 11:33:58.936585 kernel: EDAC MC: Ver: 3.0.0 Jul 15 11:33:58.962867 systemd[1]: Finished systemd-udev-settle.service. 
Jul 15 11:33:58.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.964770 systemd[1]: Starting lvm2-activation-early.service... Jul 15 11:33:58.971344 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:33:58.998331 systemd[1]: Finished lvm2-activation-early.service. Jul 15 11:33:58.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:58.999346 systemd[1]: Reached target cryptsetup.target. Jul 15 11:33:59.001155 systemd[1]: Starting lvm2-activation.service... Jul 15 11:33:59.004368 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 15 11:33:59.039098 systemd[1]: Finished lvm2-activation.service. Jul 15 11:33:59.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.040020 systemd[1]: Reached target local-fs-pre.target. Jul 15 11:33:59.040838 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 11:33:59.040862 systemd[1]: Reached target local-fs.target. Jul 15 11:33:59.041632 systemd[1]: Reached target machines.target. Jul 15 11:33:59.043229 systemd[1]: Starting ldconfig.service... Jul 15 11:33:59.044121 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 15 11:33:59.044155 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:33:59.044988 systemd[1]: Starting systemd-boot-update.service... Jul 15 11:33:59.046690 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 15 11:33:59.049005 systemd[1]: Starting systemd-machine-id-commit.service... Jul 15 11:33:59.051062 systemd[1]: Starting systemd-sysext.service... Jul 15 11:33:59.052242 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1057 (bootctl) Jul 15 11:33:59.053062 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 15 11:33:59.055199 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 15 11:33:59.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.062048 systemd[1]: Unmounting usr-share-oem.mount... Jul 15 11:33:59.066189 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 15 11:33:59.066320 systemd[1]: Unmounted usr-share-oem.mount. Jul 15 11:33:59.074591 kernel: loop0: detected capacity change from 0 to 221472 Jul 15 11:33:59.084559 systemd-fsck[1065]: fsck.fat 4.2 (2021-01-31) Jul 15 11:33:59.084559 systemd-fsck[1065]: /dev/vda1: 790 files, 120725/258078 clusters Jul 15 11:33:59.086558 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 15 11:33:59.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.089610 systemd[1]: Mounting boot.mount... 
Jul 15 11:33:59.736392 systemd[1]: Mounted boot.mount. Jul 15 11:33:59.746572 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 11:33:59.750599 systemd[1]: Finished systemd-boot-update.service. Jul 15 11:33:59.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.763561 kernel: loop1: detected capacity change from 0 to 221472 Jul 15 11:33:59.767074 (sd-sysext)[1070]: Using extensions 'kubernetes'. Jul 15 11:33:59.767430 (sd-sysext)[1070]: Merged extensions into '/usr'. Jul 15 11:33:59.783620 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:59.784953 systemd[1]: Mounting usr-share-oem.mount... Jul 15 11:33:59.786141 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:33:59.787095 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:33:59.789569 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:33:59.792136 systemd[1]: Starting modprobe@loop.service... Jul 15 11:33:59.793310 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:33:59.793408 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:33:59.793510 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:33:59.795739 systemd[1]: Mounted usr-share-oem.mount. Jul 15 11:33:59.796724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:33:59.796827 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 15 11:33:59.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.797929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:33:59.798026 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:33:59.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.799118 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:33:59.799220 systemd[1]: Finished modprobe@loop.service. Jul 15 11:33:59.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.800778 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 15 11:33:59.800865 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:33:59.801761 systemd[1]: Finished systemd-sysext.service. Jul 15 11:33:59.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:33:59.803722 systemd[1]: Starting ensure-sysext.service... Jul 15 11:33:59.805274 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 15 11:33:59.810171 systemd[1]: Reloading. Jul 15 11:33:59.815292 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 15 11:33:59.816438 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 11:33:59.817431 ldconfig[1056]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 11:33:59.818322 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 11:33:59.868780 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-07-15T11:33:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:33:59.869094 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-07-15T11:33:59Z" level=info msg="torcx already run" Jul 15 11:33:59.926892 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:33:59.926906 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Jul 15 11:33:59.943388 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:33:59.992261 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 11:33:59.994000 audit: BPF prog-id=27 op=LOAD Jul 15 11:33:59.996334 kernel: kauditd_printk_skb: 240 callbacks suppressed Jul 15 11:33:59.996400 kernel: audit: type=1334 audit(1752579239.994:164): prog-id=27 op=LOAD Jul 15 11:33:59.996435 kernel: audit: type=1334 audit(1752579239.994:165): prog-id=18 op=UNLOAD Jul 15 11:33:59.994000 audit: BPF prog-id=18 op=UNLOAD Jul 15 11:33:59.997316 kernel: audit: type=1334 audit(1752579239.995:166): prog-id=28 op=LOAD Jul 15 11:33:59.995000 audit: BPF prog-id=28 op=LOAD Jul 15 11:33:59.998252 kernel: audit: type=1334 audit(1752579239.997:167): prog-id=29 op=LOAD Jul 15 11:33:59.997000 audit: BPF prog-id=29 op=LOAD Jul 15 11:33:59.998692 systemd-networkd[1033]: eth0: Gained IPv6LL Jul 15 11:33:59.999377 kernel: audit: type=1334 audit(1752579239.997:168): prog-id=19 op=UNLOAD Jul 15 11:33:59.997000 audit: BPF prog-id=19 op=UNLOAD Jul 15 11:34:00.000337 kernel: audit: type=1334 audit(1752579239.997:169): prog-id=20 op=UNLOAD Jul 15 11:33:59.997000 audit: BPF prog-id=20 op=UNLOAD Jul 15 11:34:00.001317 kernel: audit: type=1334 audit(1752579239.999:170): prog-id=30 op=LOAD Jul 15 11:33:59.999000 audit: BPF prog-id=30 op=LOAD Jul 15 11:34:00.002327 kernel: audit: type=1334 audit(1752579240.001:171): prog-id=31 op=LOAD Jul 15 11:34:00.001000 audit: BPF prog-id=31 op=LOAD Jul 15 11:34:00.003347 kernel: audit: type=1334 audit(1752579240.001:172): prog-id=21 op=UNLOAD Jul 15 11:34:00.001000 audit: BPF prog-id=21 op=UNLOAD Jul 15 11:34:00.004325 kernel: audit: type=1334 audit(1752579240.001:173): prog-id=22 op=UNLOAD Jul 15 11:34:00.001000 audit: BPF prog-id=22 op=UNLOAD Jul 
15 11:34:00.004000 audit: BPF prog-id=32 op=LOAD Jul 15 11:34:00.004000 audit: BPF prog-id=23 op=UNLOAD Jul 15 11:34:00.005000 audit: BPF prog-id=33 op=LOAD Jul 15 11:34:00.005000 audit: BPF prog-id=24 op=UNLOAD Jul 15 11:34:00.005000 audit: BPF prog-id=34 op=LOAD Jul 15 11:34:00.005000 audit: BPF prog-id=35 op=LOAD Jul 15 11:34:00.005000 audit: BPF prog-id=25 op=UNLOAD Jul 15 11:34:00.005000 audit: BPF prog-id=26 op=UNLOAD Jul 15 11:34:00.010068 systemd[1]: Finished ldconfig.service. Jul 15 11:34:00.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:34:00.011312 systemd[1]: Finished systemd-machine-id-commit.service. Jul 15 11:34:00.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:34:00.013341 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 15 11:34:00.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:34:00.017131 systemd[1]: Starting audit-rules.service... Jul 15 11:34:00.018818 systemd[1]: Starting clean-ca-certificates.service... Jul 15 11:34:00.021007 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 15 11:34:00.022000 audit: BPF prog-id=36 op=LOAD Jul 15 11:34:00.023794 systemd[1]: Starting systemd-resolved.service... Jul 15 11:34:00.024000 audit: BPF prog-id=37 op=LOAD Jul 15 11:34:00.026124 systemd[1]: Starting systemd-timesyncd.service... Jul 15 11:34:00.028040 systemd[1]: Starting systemd-update-utmp.service... 
Jul 15 11:34:00.029618 systemd[1]: Finished clean-ca-certificates.service. Jul 15 11:34:00.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:34:00.031000 audit[1151]: SYSTEM_BOOT pid=1151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 15 11:34:00.034635 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:34:00.037436 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:34:00.037667 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:34:00.039612 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:34:00.041417 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:34:00.043423 systemd[1]: Starting modprobe@loop.service... Jul 15 11:34:00.044366 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:34:00.044474 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:34:00.044611 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:34:00.044681 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:34:00.045939 systemd[1]: Finished systemd-journal-catalog-update.service. 
Jul 15 11:34:00.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:34:00.047462 systemd[1]: Finished systemd-update-utmp.service. Jul 15 11:34:00.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 15 11:34:00.048000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 15 11:34:00.048000 audit[1163]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffd11d61d0 a2=420 a3=0 items=0 ppid=1139 pid=1163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 15 11:34:00.048000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 15 11:34:00.048930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:34:00.049094 augenrules[1163]: No rules Jul 15 11:34:00.049030 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:34:00.050346 systemd[1]: Finished audit-rules.service. Jul 15 11:34:00.051656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:34:00.051768 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:34:00.053180 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:34:00.053289 systemd[1]: Finished modprobe@loop.service. Jul 15 11:34:00.056281 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 15 11:34:00.056468 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:34:00.057564 systemd[1]: Starting modprobe@dm_mod.service... Jul 15 11:34:00.059395 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:34:00.061275 systemd[1]: Starting modprobe@loop.service... Jul 15 11:34:00.062074 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:34:00.062166 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:34:00.063401 systemd[1]: Starting systemd-update-done.service... Jul 15 11:34:00.064374 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:34:00.064482 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:34:00.065583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:34:00.065732 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:34:00.067166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:34:00.067261 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:34:00.068583 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:34:00.068673 systemd[1]: Finished modprobe@loop.service. Jul 15 11:34:00.069914 systemd[1]: Finished systemd-update-done.service. Jul 15 11:34:00.073182 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:34:00.073374 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 15 11:34:00.074413 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 15 11:34:00.891557 systemd-timesyncd[1149]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 11:34:00.891611 systemd-timesyncd[1149]: Initial clock synchronization to Tue 2025-07-15 11:34:00.891462 UTC. Jul 15 11:34:00.892748 systemd[1]: Starting modprobe@drm.service... Jul 15 11:34:00.894723 systemd[1]: Starting modprobe@efi_pstore.service... Jul 15 11:34:00.896504 systemd[1]: Starting modprobe@loop.service... Jul 15 11:34:00.897342 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 15 11:34:00.897435 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:34:00.898463 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 15 11:34:00.899469 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 11:34:00.899564 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 11:34:00.900279 systemd[1]: Started systemd-timesyncd.service. Jul 15 11:34:00.901925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 11:34:00.902034 systemd[1]: Finished modprobe@dm_mod.service. Jul 15 11:34:00.903269 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 11:34:00.903368 systemd[1]: Finished modprobe@drm.service. Jul 15 11:34:00.904513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 11:34:00.904609 systemd[1]: Finished modprobe@efi_pstore.service. Jul 15 11:34:00.905662 systemd-resolved[1148]: Positive Trust Anchors: Jul 15 11:34:00.905676 systemd-resolved[1148]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 11:34:00.905719 systemd-resolved[1148]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 15 11:34:00.905918 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 11:34:00.906008 systemd[1]: Finished modprobe@loop.service. Jul 15 11:34:00.907297 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 15 11:34:00.909817 systemd[1]: Finished ensure-sysext.service. Jul 15 11:34:00.911478 systemd[1]: Reached target time-set.target. Jul 15 11:34:00.912412 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 11:34:00.912445 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 15 11:34:00.913690 systemd-resolved[1148]: Defaulting to hostname 'linux'. Jul 15 11:34:00.914947 systemd[1]: Started systemd-resolved.service. Jul 15 11:34:00.915834 systemd[1]: Reached target network.target. Jul 15 11:34:00.916617 systemd[1]: Reached target network-online.target. Jul 15 11:34:00.917571 systemd[1]: Reached target nss-lookup.target. Jul 15 11:34:00.918410 systemd[1]: Reached target sysinit.target. Jul 15 11:34:00.919270 systemd[1]: Started motdgen.path. Jul 15 11:34:00.920015 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 15 11:34:00.921298 systemd[1]: Started logrotate.timer. Jul 15 11:34:00.922130 systemd[1]: Started mdadm.timer. Jul 15 11:34:00.922848 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Jul 15 11:34:00.923717 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 11:34:00.923738 systemd[1]: Reached target paths.target. Jul 15 11:34:00.924494 systemd[1]: Reached target timers.target. Jul 15 11:34:00.925538 systemd[1]: Listening on dbus.socket. Jul 15 11:34:00.927073 systemd[1]: Starting docker.socket... Jul 15 11:34:00.929750 systemd[1]: Listening on sshd.socket. Jul 15 11:34:00.930592 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:34:00.930948 systemd[1]: Listening on docker.socket. Jul 15 11:34:00.931793 systemd[1]: Reached target sockets.target. Jul 15 11:34:00.932581 systemd[1]: Reached target basic.target. Jul 15 11:34:00.933390 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:34:00.933415 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 15 11:34:00.934190 systemd[1]: Starting containerd.service... Jul 15 11:34:00.935768 systemd[1]: Starting dbus.service... Jul 15 11:34:00.937259 systemd[1]: Starting enable-oem-cloudinit.service... Jul 15 11:34:00.938974 systemd[1]: Starting extend-filesystems.service... Jul 15 11:34:00.939959 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 15 11:34:00.942515 jq[1182]: false Jul 15 11:34:00.940875 systemd[1]: Starting kubelet.service... Jul 15 11:34:00.942424 systemd[1]: Starting motdgen.service... Jul 15 11:34:00.944085 systemd[1]: Starting prepare-helm.service... Jul 15 11:34:00.945762 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 15 11:34:00.947485 systemd[1]: Starting sshd-keygen.service... 
Jul 15 11:34:00.949729 extend-filesystems[1183]: Found loop1 Jul 15 11:34:00.949729 extend-filesystems[1183]: Found sr0 Jul 15 11:34:00.949729 extend-filesystems[1183]: Found vda Jul 15 11:34:00.949729 extend-filesystems[1183]: Found vda1 Jul 15 11:34:00.949729 extend-filesystems[1183]: Found vda2 Jul 15 11:34:00.949729 extend-filesystems[1183]: Found vda3 Jul 15 11:34:00.949729 extend-filesystems[1183]: Found usr Jul 15 11:34:00.949729 extend-filesystems[1183]: Found vda4 Jul 15 11:34:00.949729 extend-filesystems[1183]: Found vda6 Jul 15 11:34:00.949729 extend-filesystems[1183]: Found vda7 Jul 15 11:34:00.949729 extend-filesystems[1183]: Found vda9 Jul 15 11:34:00.949729 extend-filesystems[1183]: Checking size of /dev/vda9 Jul 15 11:34:00.950978 systemd[1]: Starting systemd-logind.service... Jul 15 11:34:00.955036 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 15 11:34:00.955074 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 11:34:00.955381 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 11:34:00.956724 systemd[1]: Starting update-engine.service... Jul 15 11:34:00.961374 extend-filesystems[1183]: Resized partition /dev/vda9 Jul 15 11:34:00.961446 dbus-daemon[1181]: [system] SELinux support is enabled Jul 15 11:34:00.963676 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 15 11:34:00.965499 systemd[1]: Started dbus.service. Jul 15 11:34:00.968563 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 11:34:00.968722 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 15 11:34:00.970136 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 15 11:34:00.970274 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 15 11:34:00.973613 jq[1207]: true Jul 15 11:34:00.974050 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 11:34:00.974178 systemd[1]: Finished motdgen.service. Jul 15 11:34:00.977569 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 11:34:00.977594 systemd[1]: Reached target system-config.target. Jul 15 11:34:00.978581 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 11:34:00.978600 systemd[1]: Reached target user-config.target. Jul 15 11:34:00.979839 tar[1210]: linux-amd64/helm Jul 15 11:34:00.983291 extend-filesystems[1205]: resize2fs 1.46.5 (30-Dec-2021) Jul 15 11:34:00.988494 jq[1217]: true Jul 15 11:34:00.988724 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 11:34:01.003783 update_engine[1201]: I0715 11:34:01.003489 1201 main.cc:92] Flatcar Update Engine starting Jul 15 11:34:01.014446 systemd[1]: Started update-engine.service. Jul 15 11:34:01.016946 update_engine[1201]: I0715 11:34:01.016919 1201 update_check_scheduler.cc:74] Next update check in 4m26s Jul 15 11:34:01.016957 systemd[1]: Started locksmithd.service. Jul 15 11:34:01.022794 systemd-logind[1195]: Watching system buttons on /dev/input/event1 (Power Button) Jul 15 11:34:01.023605 systemd-logind[1195]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 11:34:01.023809 systemd-logind[1195]: New seat seat0. Jul 15 11:34:01.025201 systemd[1]: Started systemd-logind.service. 
Jul 15 11:34:01.029714 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 11:34:01.052572 env[1212]: time="2025-07-15T11:34:01.051676348Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 15 11:34:01.052833 extend-filesystems[1205]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 11:34:01.052833 extend-filesystems[1205]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 11:34:01.052833 extend-filesystems[1205]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 11:34:01.057096 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 11:34:01.061245 bash[1233]: Updated "/home/core/.ssh/authorized_keys" Jul 15 11:34:01.061317 extend-filesystems[1183]: Resized filesystem in /dev/vda9 Jul 15 11:34:01.057236 systemd[1]: Finished extend-filesystems.service. Jul 15 11:34:01.059559 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 15 11:34:01.083657 env[1212]: time="2025-07-15T11:34:01.083602760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 15 11:34:01.088791 env[1212]: time="2025-07-15T11:34:01.088765280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:34:01.091859 env[1212]: time="2025-07-15T11:34:01.091822903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.188-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:34:01.091859 env[1212]: time="2025-07-15T11:34:01.091855854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 15 11:34:01.092070 env[1212]: time="2025-07-15T11:34:01.092042003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:34:01.092070 env[1212]: time="2025-07-15T11:34:01.092064215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 15 11:34:01.092140 env[1212]: time="2025-07-15T11:34:01.092075947Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 15 11:34:01.092140 env[1212]: time="2025-07-15T11:34:01.092085535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 15 11:34:01.092183 env[1212]: time="2025-07-15T11:34:01.092145648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:34:01.092361 env[1212]: time="2025-07-15T11:34:01.092336105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 15 11:34:01.092473 env[1212]: time="2025-07-15T11:34:01.092447804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 15 11:34:01.092473 env[1212]: time="2025-07-15T11:34:01.092466389Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 15 11:34:01.092560 env[1212]: time="2025-07-15T11:34:01.092506665Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 15 11:34:01.092560 env[1212]: time="2025-07-15T11:34:01.092516834Z" level=info msg="metadata content store policy set" policy=shared Jul 15 11:34:01.097501 env[1212]: time="2025-07-15T11:34:01.097472145Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 15 11:34:01.097553 env[1212]: time="2025-07-15T11:34:01.097501710Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 15 11:34:01.097553 env[1212]: time="2025-07-15T11:34:01.097513983Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 15 11:34:01.097553 env[1212]: time="2025-07-15T11:34:01.097539201Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 15 11:34:01.097553 env[1212]: time="2025-07-15T11:34:01.097552385Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 15 11:34:01.097641 env[1212]: time="2025-07-15T11:34:01.097567524Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 15 11:34:01.097641 env[1212]: time="2025-07-15T11:34:01.097581470Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 15 11:34:01.097641 env[1212]: time="2025-07-15T11:34:01.097593212Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 15 11:34:01.097641 env[1212]: time="2025-07-15T11:34:01.097604673Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Jul 15 11:34:01.097641 env[1212]: time="2025-07-15T11:34:01.097616436Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 15 11:34:01.097641 env[1212]: time="2025-07-15T11:34:01.097641673Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 15 11:34:01.097773 env[1212]: time="2025-07-15T11:34:01.097653986Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 15 11:34:01.097822 env[1212]: time="2025-07-15T11:34:01.097782978Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 15 11:34:01.097898 env[1212]: time="2025-07-15T11:34:01.097874509Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 15 11:34:01.098098 env[1212]: time="2025-07-15T11:34:01.098074675Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 15 11:34:01.098144 env[1212]: time="2025-07-15T11:34:01.098103008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098144 env[1212]: time="2025-07-15T11:34:01.098115852Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 15 11:34:01.098184 env[1212]: time="2025-07-15T11:34:01.098152310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098184 env[1212]: time="2025-07-15T11:34:01.098164974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098184 env[1212]: time="2025-07-15T11:34:01.098176345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 15 11:34:01.098242 env[1212]: time="2025-07-15T11:34:01.098186114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098242 env[1212]: time="2025-07-15T11:34:01.098198627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098242 env[1212]: time="2025-07-15T11:34:01.098209938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098242 env[1212]: time="2025-07-15T11:34:01.098219616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098242 env[1212]: time="2025-07-15T11:34:01.098230186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098242 env[1212]: time="2025-07-15T11:34:01.098242870Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 15 11:34:01.098359 env[1212]: time="2025-07-15T11:34:01.098333851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098359 env[1212]: time="2025-07-15T11:34:01.098347446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098359 env[1212]: time="2025-07-15T11:34:01.098357806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098416 env[1212]: time="2025-07-15T11:34:01.098368365Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 15 11:34:01.098416 env[1212]: time="2025-07-15T11:34:01.098380538Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 15 11:34:01.098416 env[1212]: time="2025-07-15T11:34:01.098390086Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 15 11:34:01.098416 env[1212]: time="2025-07-15T11:34:01.098407098Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 15 11:34:01.098494 env[1212]: time="2025-07-15T11:34:01.098442084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 15 11:34:01.098742 env[1212]: time="2025-07-15T11:34:01.098608275Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 15 11:34:01.099357 env[1212]: time="2025-07-15T11:34:01.098789916Z" level=info msg="Connect containerd service" Jul 15 11:34:01.099357 env[1212]: time="2025-07-15T11:34:01.098835491Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 15 11:34:01.101485 env[1212]: time="2025-07-15T11:34:01.099885400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:34:01.101485 env[1212]: time="2025-07-15T11:34:01.099990166Z" level=info msg="Start subscribing containerd event" Jul 15 11:34:01.101485 env[1212]: time="2025-07-15T11:34:01.100023679Z" level=info msg="Start recovering state" Jul 15 11:34:01.101485 env[1212]: time="2025-07-15T11:34:01.100064736Z" level=info msg="Start event monitor" Jul 15 11:34:01.101485 env[1212]: time="2025-07-15T11:34:01.100080315Z" level=info msg="Start snapshots syncer" Jul 15 11:34:01.101485 env[1212]: time="2025-07-15T11:34:01.100088601Z" level=info msg="Start cni network conf syncer for default" Jul 15 11:34:01.101485 env[1212]: 
time="2025-07-15T11:34:01.100095213Z" level=info msg="Start streaming server" Jul 15 11:34:01.101485 env[1212]: time="2025-07-15T11:34:01.100351834Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 11:34:01.101485 env[1212]: time="2025-07-15T11:34:01.100397831Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 11:34:01.101485 env[1212]: time="2025-07-15T11:34:01.100474274Z" level=info msg="containerd successfully booted in 0.069311s" Jul 15 11:34:01.100544 systemd[1]: Started containerd.service. Jul 15 11:34:01.103202 locksmithd[1235]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 11:34:01.398088 tar[1210]: linux-amd64/LICENSE Jul 15 11:34:01.398203 tar[1210]: linux-amd64/README.md Jul 15 11:34:01.402422 systemd[1]: Finished prepare-helm.service. Jul 15 11:34:01.652713 systemd[1]: Started kubelet.service. Jul 15 11:34:01.765231 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 11:34:01.783284 systemd[1]: Finished sshd-keygen.service. Jul 15 11:34:01.785507 systemd[1]: Starting issuegen.service... Jul 15 11:34:01.790535 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 11:34:01.790761 systemd[1]: Finished issuegen.service. Jul 15 11:34:01.793331 systemd[1]: Starting systemd-user-sessions.service... Jul 15 11:34:01.798455 systemd[1]: Finished systemd-user-sessions.service. Jul 15 11:34:01.801024 systemd[1]: Started getty@tty1.service. Jul 15 11:34:01.803020 systemd[1]: Started serial-getty@ttyS0.service. Jul 15 11:34:01.804145 systemd[1]: Reached target getty.target. Jul 15 11:34:01.805002 systemd[1]: Reached target multi-user.target. Jul 15 11:34:01.807063 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 15 11:34:01.813984 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 15 11:34:01.814133 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Jul 15 11:34:01.815235 systemd[1]: Startup finished in 610ms (kernel) + 5.550s (initrd) + 5.724s (userspace) = 11.885s. Jul 15 11:34:02.045964 kubelet[1250]: E0715 11:34:02.045856 1250 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 11:34:02.047524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 11:34:02.047645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 11:34:03.485450 systemd[1]: Created slice system-sshd.slice. Jul 15 11:34:03.486290 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:59340.service. Jul 15 11:34:03.529118 sshd[1273]: Accepted publickey for core from 10.0.0.1 port 59340 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:34:03.530329 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:34:03.537829 systemd-logind[1195]: New session 1 of user core. Jul 15 11:34:03.538694 systemd[1]: Created slice user-500.slice. Jul 15 11:34:03.539676 systemd[1]: Starting user-runtime-dir@500.service... Jul 15 11:34:03.546835 systemd[1]: Finished user-runtime-dir@500.service. Jul 15 11:34:03.547901 systemd[1]: Starting user@500.service... Jul 15 11:34:03.550204 (systemd)[1276]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:34:03.616302 systemd[1276]: Queued start job for default target default.target. Jul 15 11:34:03.616798 systemd[1276]: Reached target paths.target. Jul 15 11:34:03.616825 systemd[1276]: Reached target sockets.target. Jul 15 11:34:03.616841 systemd[1276]: Reached target timers.target. Jul 15 11:34:03.616854 systemd[1276]: Reached target basic.target. 
Jul 15 11:34:03.616899 systemd[1276]: Reached target default.target. Jul 15 11:34:03.616928 systemd[1276]: Startup finished in 62ms. Jul 15 11:34:03.616980 systemd[1]: Started user@500.service. Jul 15 11:34:03.617906 systemd[1]: Started session-1.scope. Jul 15 11:34:03.667704 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:59344.service. Jul 15 11:34:03.708844 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 59344 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:34:03.710122 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:34:03.713741 systemd-logind[1195]: New session 2 of user core. Jul 15 11:34:03.714730 systemd[1]: Started session-2.scope. Jul 15 11:34:03.770908 sshd[1285]: pam_unix(sshd:session): session closed for user core Jul 15 11:34:03.773482 systemd[1]: sshd@1-10.0.0.101:22-10.0.0.1:59344.service: Deactivated successfully. Jul 15 11:34:03.774005 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 11:34:03.774445 systemd-logind[1195]: Session 2 logged out. Waiting for processes to exit. Jul 15 11:34:03.775462 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:59358.service. Jul 15 11:34:03.776080 systemd-logind[1195]: Removed session 2. Jul 15 11:34:03.816054 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 59358 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:34:03.817009 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:34:03.820340 systemd-logind[1195]: New session 3 of user core. Jul 15 11:34:03.821218 systemd[1]: Started session-3.scope. Jul 15 11:34:03.870769 sshd[1291]: pam_unix(sshd:session): session closed for user core Jul 15 11:34:03.873307 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:59358.service: Deactivated successfully. Jul 15 11:34:03.873766 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 11:34:03.874252 systemd-logind[1195]: Session 3 logged out. 
Waiting for processes to exit. Jul 15 11:34:03.875037 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:59370.service. Jul 15 11:34:03.875829 systemd-logind[1195]: Removed session 3. Jul 15 11:34:03.916050 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 59370 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:34:03.917232 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:34:03.920486 systemd-logind[1195]: New session 4 of user core. Jul 15 11:34:03.921445 systemd[1]: Started session-4.scope. Jul 15 11:34:03.974448 sshd[1297]: pam_unix(sshd:session): session closed for user core Jul 15 11:34:03.976964 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:59370.service: Deactivated successfully. Jul 15 11:34:03.977530 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 11:34:03.978070 systemd-logind[1195]: Session 4 logged out. Waiting for processes to exit. Jul 15 11:34:03.978902 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:59386.service. Jul 15 11:34:03.979670 systemd-logind[1195]: Removed session 4. Jul 15 11:34:04.017703 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 59386 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:34:04.018790 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:34:04.022173 systemd-logind[1195]: New session 5 of user core. Jul 15 11:34:04.023021 systemd[1]: Started session-5.scope. Jul 15 11:34:04.077418 sudo[1306]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 11:34:04.077609 sudo[1306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 15 11:34:04.097357 systemd[1]: Starting docker.service... 
Jul 15 11:34:04.129565 env[1318]: time="2025-07-15T11:34:04.129492623Z" level=info msg="Starting up"
Jul 15 11:34:04.130660 env[1318]: time="2025-07-15T11:34:04.130636457Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:34:04.130660 env[1318]: time="2025-07-15T11:34:04.130652167Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:34:04.130766 env[1318]: time="2025-07-15T11:34:04.130670271Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:34:04.130766 env[1318]: time="2025-07-15T11:34:04.130691270Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:34:04.132087 env[1318]: time="2025-07-15T11:34:04.132023889Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 15 11:34:04.132087 env[1318]: time="2025-07-15T11:34:04.132036843Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 15 11:34:04.132087 env[1318]: time="2025-07-15T11:34:04.132047232Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 15 11:34:04.132087 env[1318]: time="2025-07-15T11:34:04.132054696Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 15 11:34:05.442924 env[1318]: time="2025-07-15T11:34:05.442870446Z" level=info msg="Loading containers: start."
Jul 15 11:34:05.737713 kernel: Initializing XFRM netlink socket
Jul 15 11:34:05.762477 env[1318]: time="2025-07-15T11:34:05.762436061Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 15 11:34:05.805073 systemd-networkd[1033]: docker0: Link UP
Jul 15 11:34:05.980751 env[1318]: time="2025-07-15T11:34:05.980705667Z" level=info msg="Loading containers: done."
Jul 15 11:34:05.990890 env[1318]: time="2025-07-15T11:34:05.990840860Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 11:34:05.991062 env[1318]: time="2025-07-15T11:34:05.990990641Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 15 11:34:05.991093 env[1318]: time="2025-07-15T11:34:05.991078415Z" level=info msg="Daemon has completed initialization"
Jul 15 11:34:06.006290 systemd[1]: Started docker.service.
Jul 15 11:34:06.013195 env[1318]: time="2025-07-15T11:34:06.013113631Z" level=info msg="API listen on /run/docker.sock"
Jul 15 11:34:06.720060 env[1212]: time="2025-07-15T11:34:06.720018919Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 15 11:34:08.282586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672835709.mount: Deactivated successfully.
Jul 15 11:34:10.153395 env[1212]: time="2025-07-15T11:34:10.153334885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:10.155221 env[1212]: time="2025-07-15T11:34:10.155154247Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:10.156793 env[1212]: time="2025-07-15T11:34:10.156757943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:10.158503 env[1212]: time="2025-07-15T11:34:10.158477918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:10.159282 env[1212]: time="2025-07-15T11:34:10.159257891Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 15 11:34:10.160033 env[1212]: time="2025-07-15T11:34:10.160004581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 15 11:34:12.109921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 11:34:12.110416 systemd[1]: Stopped kubelet.service.
Jul 15 11:34:12.156461 systemd[1]: Starting kubelet.service...
Jul 15 11:34:12.245546 systemd[1]: Started kubelet.service.
Jul 15 11:34:12.330550 kubelet[1452]: E0715 11:34:12.330486 1452 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:34:12.333452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:34:12.333560 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:34:13.208396 env[1212]: time="2025-07-15T11:34:13.208333847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:13.210103 env[1212]: time="2025-07-15T11:34:13.210074992Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:13.212137 env[1212]: time="2025-07-15T11:34:13.212090501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:13.214053 env[1212]: time="2025-07-15T11:34:13.214020069Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:13.214804 env[1212]: time="2025-07-15T11:34:13.214763192Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 15 11:34:13.215304 env[1212]: time="2025-07-15T11:34:13.215255926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 15 11:34:15.418694 env[1212]: time="2025-07-15T11:34:15.418610456Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:15.421112 env[1212]: time="2025-07-15T11:34:15.421050350Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:15.423124 env[1212]: time="2025-07-15T11:34:15.423067082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:15.425785 env[1212]: time="2025-07-15T11:34:15.425748139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:15.426529 env[1212]: time="2025-07-15T11:34:15.426476544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 15 11:34:15.427338 env[1212]: time="2025-07-15T11:34:15.427300750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 15 11:34:16.672306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1256761352.mount: Deactivated successfully.
Jul 15 11:34:17.874237 env[1212]: time="2025-07-15T11:34:17.874183114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:17.876003 env[1212]: time="2025-07-15T11:34:17.875976677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:17.877508 env[1212]: time="2025-07-15T11:34:17.877449859Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:17.878775 env[1212]: time="2025-07-15T11:34:17.878729598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:17.879198 env[1212]: time="2025-07-15T11:34:17.879135249Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 15 11:34:17.879727 env[1212]: time="2025-07-15T11:34:17.879703755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 11:34:18.680194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664842867.mount: Deactivated successfully.
Jul 15 11:34:22.037622 env[1212]: time="2025-07-15T11:34:22.037549094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:22.039643 env[1212]: time="2025-07-15T11:34:22.039597464Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:22.041193 env[1212]: time="2025-07-15T11:34:22.041166967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:22.043158 env[1212]: time="2025-07-15T11:34:22.043110932Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:22.043931 env[1212]: time="2025-07-15T11:34:22.043884703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 15 11:34:22.044386 env[1212]: time="2025-07-15T11:34:22.044356768Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 11:34:22.359847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 11:34:22.360222 systemd[1]: Stopped kubelet.service.
Jul 15 11:34:22.362310 systemd[1]: Starting kubelet.service...
Jul 15 11:34:22.468353 systemd[1]: Started kubelet.service.
Jul 15 11:34:22.824869 kubelet[1463]: E0715 11:34:22.824737 1463 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 11:34:22.826704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 11:34:22.826850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 11:34:23.199783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825978008.mount: Deactivated successfully.
Jul 15 11:34:23.206255 env[1212]: time="2025-07-15T11:34:23.206197149Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:23.208183 env[1212]: time="2025-07-15T11:34:23.208142566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:23.209657 env[1212]: time="2025-07-15T11:34:23.209610288Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:23.210981 env[1212]: time="2025-07-15T11:34:23.210956362Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:23.211277 env[1212]: time="2025-07-15T11:34:23.211241968Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 15 11:34:23.211850 env[1212]: time="2025-07-15T11:34:23.211817176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 15 11:34:23.717249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821725220.mount: Deactivated successfully.
Jul 15 11:34:27.532168 env[1212]: time="2025-07-15T11:34:27.532114312Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:27.534333 env[1212]: time="2025-07-15T11:34:27.534275444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:27.536463 env[1212]: time="2025-07-15T11:34:27.536432469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:27.538276 env[1212]: time="2025-07-15T11:34:27.538242302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 15 11:34:27.538985 env[1212]: time="2025-07-15T11:34:27.538949348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 15 11:34:29.721790 systemd[1]: Stopped kubelet.service.
Jul 15 11:34:29.723822 systemd[1]: Starting kubelet.service...
Jul 15 11:34:29.743046 systemd[1]: Reloading.
Jul 15 11:34:29.807933 /usr/lib/systemd/system-generators/torcx-generator[1517]: time="2025-07-15T11:34:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
Jul 15 11:34:29.808274 /usr/lib/systemd/system-generators/torcx-generator[1517]: time="2025-07-15T11:34:29Z" level=info msg="torcx already run"
Jul 15 11:34:30.035449 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 15 11:34:30.035464 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 15 11:34:30.051993 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 11:34:30.121415 systemd[1]: Started kubelet.service.
Jul 15 11:34:30.123914 systemd[1]: Stopping kubelet.service...
Jul 15 11:34:30.124408 systemd[1]: kubelet.service: Deactivated successfully.
Jul 15 11:34:30.124543 systemd[1]: Stopped kubelet.service.
Jul 15 11:34:30.125828 systemd[1]: Starting kubelet.service...
Jul 15 11:34:30.209201 systemd[1]: Started kubelet.service.
Jul 15 11:34:30.246625 kubelet[1570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:34:30.246625 kubelet[1570]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 11:34:30.246625 kubelet[1570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 11:34:30.247112 kubelet[1570]: I0715 11:34:30.246654 1570 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 11:34:30.660401 kubelet[1570]: I0715 11:34:30.660345 1570 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 11:34:30.660401 kubelet[1570]: I0715 11:34:30.660371 1570 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 11:34:30.660619 kubelet[1570]: I0715 11:34:30.660599 1570 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 11:34:30.678190 kubelet[1570]: E0715 11:34:30.678149 1570 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:34:30.679617 kubelet[1570]: I0715 11:34:30.679599 1570 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 11:34:30.688622 kubelet[1570]: E0715 11:34:30.688570 1570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 15 11:34:30.688622 kubelet[1570]: I0715 11:34:30.688607 1570 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 15 11:34:30.695093 kubelet[1570]: I0715 11:34:30.695067 1570 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 11:34:30.695773 kubelet[1570]: I0715 11:34:30.695745 1570 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 11:34:30.695946 kubelet[1570]: I0715 11:34:30.695904 1570 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 11:34:30.696172 kubelet[1570]: I0715 11:34:30.695936 1570 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 11:34:30.696282 kubelet[1570]: I0715 11:34:30.696173 1570 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 11:34:30.696282 kubelet[1570]: I0715 11:34:30.696187 1570 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 11:34:30.696344 kubelet[1570]: I0715 11:34:30.696300 1570 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 11:34:30.706747 kubelet[1570]: I0715 11:34:30.706709 1570 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 11:34:30.706747 kubelet[1570]: I0715 11:34:30.706740 1570 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 11:34:30.706841 kubelet[1570]: I0715 11:34:30.706784 1570 kubelet.go:314] "Adding apiserver pod source"
Jul 15 11:34:30.706841 kubelet[1570]: I0715 11:34:30.706812 1570 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 11:34:30.718889 kubelet[1570]: I0715 11:34:30.718834 1570 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 15 11:34:30.719179 kubelet[1570]: I0715 11:34:30.719142 1570 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 11:34:30.719361 kubelet[1570]: W0715 11:34:30.719212 1570 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused
Jul 15 11:34:30.719361 kubelet[1570]: E0715 11:34:30.719263 1570 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:34:30.719607 kubelet[1570]: W0715 11:34:30.719583 1570 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 11:34:30.719634 kubelet[1570]: W0715 11:34:30.719602 1570 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused
Jul 15 11:34:30.719665 kubelet[1570]: E0715 11:34:30.719645 1570 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:34:30.724131 kubelet[1570]: I0715 11:34:30.724098 1570 server.go:1274] "Started kubelet"
Jul 15 11:34:30.724175 kubelet[1570]: I0715 11:34:30.724150 1570 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 11:34:30.724930 kubelet[1570]: I0715 11:34:30.724474 1570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 11:34:30.724930 kubelet[1570]: I0715 11:34:30.724840 1570 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 11:34:30.725271 kubelet[1570]: I0715 11:34:30.725240 1570 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 11:34:30.726859 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 15 11:34:30.732136 kubelet[1570]: I0715 11:34:30.732111 1570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 11:34:30.733340 kubelet[1570]: I0715 11:34:30.733303 1570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 11:34:30.735236 kubelet[1570]: I0715 11:34:30.735218 1570 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 11:34:30.735543 kubelet[1570]: E0715 11:34:30.735522 1570 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:34:30.736343 kubelet[1570]: I0715 11:34:30.736313 1570 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 11:34:30.736475 kubelet[1570]: I0715 11:34:30.736462 1570 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 11:34:30.738807 kubelet[1570]: W0715 11:34:30.738670 1570 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused
Jul 15 11:34:30.738904 kubelet[1570]: E0715 11:34:30.738811 1570 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:34:30.738949 kubelet[1570]: E0715 11:34:30.738919 1570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="200ms"
Jul 15 11:34:30.740445 kubelet[1570]: E0715 11:34:30.740418 1570 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 11:34:30.741119 kubelet[1570]: I0715 11:34:30.741095 1570 factory.go:221] Registration of the containerd container factory successfully
Jul 15 11:34:30.741119 kubelet[1570]: I0715 11:34:30.741115 1570 factory.go:221] Registration of the systemd container factory successfully
Jul 15 11:34:30.741224 kubelet[1570]: I0715 11:34:30.741178 1570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 11:34:30.743934 kubelet[1570]: E0715 11:34:30.742413 1570 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1852698fa838b604 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 11:34:30.724072964 +0000 UTC m=+0.510520958,LastTimestamp:2025-07-15 11:34:30.724072964 +0000 UTC m=+0.510520958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 15 11:34:30.751795 kubelet[1570]: I0715 11:34:30.751767 1570 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 11:34:30.751892 kubelet[1570]: I0715 11:34:30.751813 1570 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 11:34:30.751892 kubelet[1570]: I0715 11:34:30.751832 1570 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 11:34:30.754328 kubelet[1570]: I0715 11:34:30.754291 1570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 11:34:30.755267 kubelet[1570]: I0715 11:34:30.755246 1570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 11:34:30.755322 kubelet[1570]: I0715 11:34:30.755274 1570 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 11:34:30.755322 kubelet[1570]: I0715 11:34:30.755295 1570 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 11:34:30.755416 kubelet[1570]: E0715 11:34:30.755335 1570 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 11:34:30.755957 kubelet[1570]: W0715 11:34:30.755933 1570 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused
Jul 15 11:34:30.756003 kubelet[1570]: E0715 11:34:30.755966 1570 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError"
Jul 15 11:34:30.835654 kubelet[1570]: E0715 11:34:30.835598 1570 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:34:30.856136 kubelet[1570]: E0715 11:34:30.856088 1570 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 11:34:30.936475 kubelet[1570]: E0715 11:34:30.936390 1570 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:34:30.939892 kubelet[1570]: E0715 11:34:30.939864 1570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="400ms"
Jul 15 11:34:31.037087 kubelet[1570]: E0715 11:34:31.037053 1570 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 11:34:31.056393 kubelet[1570]: E0715 11:34:31.056351 1570 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 11:34:31.059637 kubelet[1570]: I0715 11:34:31.059596 1570 policy_none.go:49] "None policy: Start"
Jul 15 11:34:31.060411 kubelet[1570]: I0715 11:34:31.060391 1570 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 11:34:31.060470 kubelet[1570]: I0715 11:34:31.060426 1570 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 11:34:31.066056 systemd[1]: Created slice kubepods.slice.
Jul 15 11:34:31.070050 systemd[1]: Created slice kubepods-burstable.slice.
Jul 15 11:34:31.072072 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 15 11:34:31.080281 kubelet[1570]: I0715 11:34:31.080252 1570 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:34:31.080393 kubelet[1570]: I0715 11:34:31.080377 1570 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:34:31.080501 kubelet[1570]: I0715 11:34:31.080390 1570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:34:31.081000 kubelet[1570]: I0715 11:34:31.080601 1570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:34:31.081483 kubelet[1570]: E0715 11:34:31.081462 1570 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 11:34:31.181175 kubelet[1570]: I0715 11:34:31.181149 1570 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:34:31.181467 kubelet[1570]: E0715 11:34:31.181438 1570 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jul 15 11:34:31.341334 kubelet[1570]: E0715 11:34:31.341229 1570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="800ms" Jul 15 11:34:31.383319 kubelet[1570]: I0715 11:34:31.383276 1570 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:34:31.383667 kubelet[1570]: E0715 11:34:31.383611 1570 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jul 15 11:34:31.463433 systemd[1]: Created slice 
kubepods-burstable-podd55443fb259276b1a888e6655305e83c.slice. Jul 15 11:34:31.471477 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 15 11:34:31.481151 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 15 11:34:31.540667 kubelet[1570]: I0715 11:34:31.540632 1570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d55443fb259276b1a888e6655305e83c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d55443fb259276b1a888e6655305e83c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:34:31.540667 kubelet[1570]: I0715 11:34:31.540666 1570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d55443fb259276b1a888e6655305e83c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d55443fb259276b1a888e6655305e83c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:34:31.540809 kubelet[1570]: I0715 11:34:31.540697 1570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:31.540809 kubelet[1570]: I0715 11:34:31.540714 1570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:31.540809 kubelet[1570]: I0715 11:34:31.540728 1570 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:31.540809 kubelet[1570]: I0715 11:34:31.540744 1570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:31.540809 kubelet[1570]: I0715 11:34:31.540759 1570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d55443fb259276b1a888e6655305e83c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d55443fb259276b1a888e6655305e83c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:34:31.540919 kubelet[1570]: I0715 11:34:31.540785 1570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:31.540919 kubelet[1570]: I0715 11:34:31.540802 1570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:34:31.557235 kubelet[1570]: W0715 11:34:31.557186 1570 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 15 11:34:31.557282 kubelet[1570]: E0715 11:34:31.557239 1570 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:34:31.648395 kubelet[1570]: W0715 11:34:31.648347 1570 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 15 11:34:31.648395 kubelet[1570]: E0715 11:34:31.648395 1570 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:34:31.700467 kubelet[1570]: W0715 11:34:31.700408 1570 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 15 11:34:31.700467 kubelet[1570]: E0715 11:34:31.700465 1570 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: 
connect: connection refused" logger="UnhandledError" Jul 15 11:34:31.771358 kubelet[1570]: E0715 11:34:31.771313 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:31.771908 env[1212]: time="2025-07-15T11:34:31.771861888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d55443fb259276b1a888e6655305e83c,Namespace:kube-system,Attempt:0,}" Jul 15 11:34:31.780085 kubelet[1570]: E0715 11:34:31.780050 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:31.780466 env[1212]: time="2025-07-15T11:34:31.780418611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 15 11:34:31.782576 kubelet[1570]: E0715 11:34:31.782545 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:31.782947 env[1212]: time="2025-07-15T11:34:31.782907368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 15 11:34:31.784814 kubelet[1570]: I0715 11:34:31.784765 1570 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:34:31.785089 kubelet[1570]: E0715 11:34:31.785057 1570 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jul 15 11:34:32.142138 kubelet[1570]: E0715 11:34:32.142073 1570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="1.6s" Jul 15 11:34:32.308302 kubelet[1570]: W0715 11:34:32.308262 1570 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 15 11:34:32.308302 kubelet[1570]: E0715 11:34:32.308293 1570 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:34:32.586569 kubelet[1570]: I0715 11:34:32.586494 1570 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:34:32.586881 kubelet[1570]: E0715 11:34:32.586798 1570 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jul 15 11:34:32.674539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914160267.mount: Deactivated successfully. 
Jul 15 11:34:32.681830 env[1212]: time="2025-07-15T11:34:32.681777770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.683212 kubelet[1570]: E0715 11:34:32.683182 1570 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 15 11:34:32.683468 env[1212]: time="2025-07-15T11:34:32.683428755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.684197 env[1212]: time="2025-07-15T11:34:32.684173582Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.685103 env[1212]: time="2025-07-15T11:34:32.685070984Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.689219 env[1212]: time="2025-07-15T11:34:32.689188605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.690436 env[1212]: time="2025-07-15T11:34:32.690412309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.691701 env[1212]: 
time="2025-07-15T11:34:32.691651262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.692821 env[1212]: time="2025-07-15T11:34:32.692800157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.694604 env[1212]: time="2025-07-15T11:34:32.694578711Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.696327 env[1212]: time="2025-07-15T11:34:32.696303365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.697504 env[1212]: time="2025-07-15T11:34:32.697478869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.698081 env[1212]: time="2025-07-15T11:34:32.698059458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:32.716971 env[1212]: time="2025-07-15T11:34:32.716910764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:34:32.716971 env[1212]: time="2025-07-15T11:34:32.716945108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:34:32.716971 env[1212]: time="2025-07-15T11:34:32.716954426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:34:32.717337 env[1212]: time="2025-07-15T11:34:32.717276159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1b4135084b10a642b7b1dfb125dde2906e716ee19df9dfd1c3ee9f281120152 pid=1614 runtime=io.containerd.runc.v2 Jul 15 11:34:32.729663 env[1212]: time="2025-07-15T11:34:32.729589958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:34:32.729663 env[1212]: time="2025-07-15T11:34:32.729629492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:34:32.729663 env[1212]: time="2025-07-15T11:34:32.729638699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:34:32.729906 env[1212]: time="2025-07-15T11:34:32.729797226Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/679ebca19b12a71557c3be55da467387e47f7bc210328361d3d6475c48c8dd62 pid=1648 runtime=io.containerd.runc.v2 Jul 15 11:34:32.730665 env[1212]: time="2025-07-15T11:34:32.730626651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:34:32.730665 env[1212]: time="2025-07-15T11:34:32.730651167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:34:32.730780 env[1212]: time="2025-07-15T11:34:32.730660234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:34:32.731256 systemd[1]: Started cri-containerd-b1b4135084b10a642b7b1dfb125dde2906e716ee19df9dfd1c3ee9f281120152.scope. Jul 15 11:34:32.731504 env[1212]: time="2025-07-15T11:34:32.731423115Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0fbaf31a589ea5896e2bfaef7efc5562d3dece0dc0a4cf4d3be436cc398f0f3 pid=1644 runtime=io.containerd.runc.v2 Jul 15 11:34:32.744801 systemd[1]: Started cri-containerd-f0fbaf31a589ea5896e2bfaef7efc5562d3dece0dc0a4cf4d3be436cc398f0f3.scope. Jul 15 11:34:32.760472 systemd[1]: Started cri-containerd-679ebca19b12a71557c3be55da467387e47f7bc210328361d3d6475c48c8dd62.scope. Jul 15 11:34:32.775666 env[1212]: time="2025-07-15T11:34:32.775625654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d55443fb259276b1a888e6655305e83c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1b4135084b10a642b7b1dfb125dde2906e716ee19df9dfd1c3ee9f281120152\"" Jul 15 11:34:32.777117 kubelet[1570]: E0715 11:34:32.776961 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:32.779488 env[1212]: time="2025-07-15T11:34:32.779453321Z" level=info msg="CreateContainer within sandbox \"b1b4135084b10a642b7b1dfb125dde2906e716ee19df9dfd1c3ee9f281120152\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 11:34:32.780318 env[1212]: time="2025-07-15T11:34:32.780299608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0fbaf31a589ea5896e2bfaef7efc5562d3dece0dc0a4cf4d3be436cc398f0f3\"" Jul 15 11:34:32.780970 kubelet[1570]: E0715 11:34:32.780870 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:32.782064 env[1212]: time="2025-07-15T11:34:32.782046644Z" level=info msg="CreateContainer within sandbox \"f0fbaf31a589ea5896e2bfaef7efc5562d3dece0dc0a4cf4d3be436cc398f0f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 11:34:32.792028 env[1212]: time="2025-07-15T11:34:32.791989156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"679ebca19b12a71557c3be55da467387e47f7bc210328361d3d6475c48c8dd62\"" Jul 15 11:34:32.792627 kubelet[1570]: E0715 11:34:32.792607 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:32.793660 env[1212]: time="2025-07-15T11:34:32.793630904Z" level=info msg="CreateContainer within sandbox \"679ebca19b12a71557c3be55da467387e47f7bc210328361d3d6475c48c8dd62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 11:34:32.794201 env[1212]: time="2025-07-15T11:34:32.794157241Z" level=info msg="CreateContainer within sandbox \"b1b4135084b10a642b7b1dfb125dde2906e716ee19df9dfd1c3ee9f281120152\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a45ac253b511244794366595f6eafe8d2f97eca33cb247b046b779b0952f86e\"" Jul 15 11:34:32.794802 env[1212]: time="2025-07-15T11:34:32.794778546Z" level=info msg="StartContainer for \"0a45ac253b511244794366595f6eafe8d2f97eca33cb247b046b779b0952f86e\"" Jul 15 11:34:32.802862 env[1212]: time="2025-07-15T11:34:32.802815626Z" level=info msg="CreateContainer within sandbox \"f0fbaf31a589ea5896e2bfaef7efc5562d3dece0dc0a4cf4d3be436cc398f0f3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"42ed1e223fc69c21e905890eaf4782e1f879f6e2943ea836ea83f53b0de1bf2e\"" Jul 15 11:34:32.803332 env[1212]: time="2025-07-15T11:34:32.803308349Z" level=info msg="StartContainer for \"42ed1e223fc69c21e905890eaf4782e1f879f6e2943ea836ea83f53b0de1bf2e\"" Jul 15 11:34:32.807325 systemd[1]: Started cri-containerd-0a45ac253b511244794366595f6eafe8d2f97eca33cb247b046b779b0952f86e.scope. Jul 15 11:34:32.818592 env[1212]: time="2025-07-15T11:34:32.818549537Z" level=info msg="CreateContainer within sandbox \"679ebca19b12a71557c3be55da467387e47f7bc210328361d3d6475c48c8dd62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"51a5dc0916c405759b56e6538d9953a85e0fffe546a02ce6897289cfb7a4968d\"" Jul 15 11:34:32.818969 env[1212]: time="2025-07-15T11:34:32.818947443Z" level=info msg="StartContainer for \"51a5dc0916c405759b56e6538d9953a85e0fffe546a02ce6897289cfb7a4968d\"" Jul 15 11:34:32.825793 systemd[1]: Started cri-containerd-42ed1e223fc69c21e905890eaf4782e1f879f6e2943ea836ea83f53b0de1bf2e.scope. Jul 15 11:34:32.833977 systemd[1]: Started cri-containerd-51a5dc0916c405759b56e6538d9953a85e0fffe546a02ce6897289cfb7a4968d.scope. 
Jul 15 11:34:32.849508 env[1212]: time="2025-07-15T11:34:32.849408227Z" level=info msg="StartContainer for \"0a45ac253b511244794366595f6eafe8d2f97eca33cb247b046b779b0952f86e\" returns successfully" Jul 15 11:34:32.872560 env[1212]: time="2025-07-15T11:34:32.872504412Z" level=info msg="StartContainer for \"42ed1e223fc69c21e905890eaf4782e1f879f6e2943ea836ea83f53b0de1bf2e\" returns successfully" Jul 15 11:34:32.873171 env[1212]: time="2025-07-15T11:34:32.873119676Z" level=info msg="StartContainer for \"51a5dc0916c405759b56e6538d9953a85e0fffe546a02ce6897289cfb7a4968d\" returns successfully" Jul 15 11:34:33.709691 kubelet[1570]: I0715 11:34:33.709435 1570 apiserver.go:52] "Watching apiserver" Jul 15 11:34:33.737336 kubelet[1570]: I0715 11:34:33.737278 1570 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 11:34:33.744622 kubelet[1570]: E0715 11:34:33.744584 1570 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 11:34:33.763474 kubelet[1570]: E0715 11:34:33.763445 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:33.764282 kubelet[1570]: E0715 11:34:33.764255 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:33.765648 kubelet[1570]: E0715 11:34:33.765627 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:34.070874 kubelet[1570]: E0715 11:34:34.070761 1570 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 
15 11:34:34.187851 kubelet[1570]: I0715 11:34:34.187800 1570 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:34:34.192807 kubelet[1570]: I0715 11:34:34.192774 1570 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 11:34:34.771444 kubelet[1570]: E0715 11:34:34.771401 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:35.647446 systemd[1]: Reloading. Jul 15 11:34:35.712898 /usr/lib/systemd/system-generators/torcx-generator[1872]: time="2025-07-15T11:34:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" Jul 15 11:34:35.712930 /usr/lib/systemd/system-generators/torcx-generator[1872]: time="2025-07-15T11:34:35Z" level=info msg="torcx already run" Jul 15 11:34:35.767505 kubelet[1570]: E0715 11:34:35.767476 1570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:35.773554 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 15 11:34:35.773570 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 15 11:34:35.793030 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 11:34:35.885472 systemd[1]: Stopping kubelet.service... 
Jul 15 11:34:35.907144 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 11:34:35.907336 systemd[1]: Stopped kubelet.service. Jul 15 11:34:35.908964 systemd[1]: Starting kubelet.service... Jul 15 11:34:36.002805 systemd[1]: Started kubelet.service. Jul 15 11:34:36.041354 kubelet[1918]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:34:36.041354 kubelet[1918]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 11:34:36.041354 kubelet[1918]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 11:34:36.041753 kubelet[1918]: I0715 11:34:36.041422 1918 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 11:34:36.050513 kubelet[1918]: I0715 11:34:36.050472 1918 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 11:34:36.050513 kubelet[1918]: I0715 11:34:36.050504 1918 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 11:34:36.050965 kubelet[1918]: I0715 11:34:36.050939 1918 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 11:34:36.054884 kubelet[1918]: I0715 11:34:36.052459 1918 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 15 11:34:36.054884 kubelet[1918]: I0715 11:34:36.054474 1918 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 11:34:36.058229 kubelet[1918]: E0715 11:34:36.058203 1918 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 15 11:34:36.058310 kubelet[1918]: I0715 11:34:36.058230 1918 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 15 11:34:36.063234 kubelet[1918]: I0715 11:34:36.063204 1918 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 11:34:36.063459 kubelet[1918]: I0715 11:34:36.063440 1918 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 11:34:36.063565 kubelet[1918]: I0715 11:34:36.063539 1918 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 11:34:36.063776 kubelet[1918]: I0715 11:34:36.063564 1918 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 11:34:36.063880 kubelet[1918]: I0715 11:34:36.063783 1918 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 11:34:36.063880 kubelet[1918]: I0715 11:34:36.063792 1918 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 11:34:36.063880 kubelet[1918]: I0715 11:34:36.063817 1918 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:34:36.063946 kubelet[1918]: I0715 11:34:36.063896 1918 kubelet.go:408] "Attempting 
to sync node with API server" Jul 15 11:34:36.063946 kubelet[1918]: I0715 11:34:36.063907 1918 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 11:34:36.063946 kubelet[1918]: I0715 11:34:36.063932 1918 kubelet.go:314] "Adding apiserver pod source" Jul 15 11:34:36.063946 kubelet[1918]: I0715 11:34:36.063944 1918 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 11:34:36.064445 kubelet[1918]: I0715 11:34:36.064425 1918 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 15 11:34:36.064919 kubelet[1918]: I0715 11:34:36.064895 1918 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 11:34:36.065437 kubelet[1918]: I0715 11:34:36.065415 1918 server.go:1274] "Started kubelet" Jul 15 11:34:36.065668 kubelet[1918]: I0715 11:34:36.065629 1918 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 11:34:36.065797 kubelet[1918]: I0715 11:34:36.065753 1918 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 11:34:36.066186 kubelet[1918]: I0715 11:34:36.066156 1918 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 11:34:36.070358 kubelet[1918]: I0715 11:34:36.069197 1918 server.go:449] "Adding debug handlers to kubelet server" Jul 15 11:34:36.073098 kubelet[1918]: E0715 11:34:36.072838 1918 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 11:34:36.074451 kubelet[1918]: I0715 11:34:36.073741 1918 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 11:34:36.074451 kubelet[1918]: I0715 11:34:36.073854 1918 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 11:34:36.074854 kubelet[1918]: I0715 11:34:36.074837 1918 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 11:34:36.074929 kubelet[1918]: I0715 11:34:36.074916 1918 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 11:34:36.075013 kubelet[1918]: I0715 11:34:36.075001 1918 reconciler.go:26] "Reconciler: start to sync state" Jul 15 11:34:36.075289 kubelet[1918]: I0715 11:34:36.075268 1918 factory.go:221] Registration of the systemd container factory successfully Jul 15 11:34:36.076417 kubelet[1918]: I0715 11:34:36.075509 1918 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 11:34:36.077454 kubelet[1918]: I0715 11:34:36.077224 1918 factory.go:221] Registration of the containerd container factory successfully Jul 15 11:34:36.087301 kubelet[1918]: I0715 11:34:36.087156 1918 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 11:34:36.089754 kubelet[1918]: I0715 11:34:36.089712 1918 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 11:34:36.089754 kubelet[1918]: I0715 11:34:36.089754 1918 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 11:34:36.089832 kubelet[1918]: I0715 11:34:36.089771 1918 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 11:34:36.089832 kubelet[1918]: E0715 11:34:36.089822 1918 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 11:34:36.103287 kubelet[1918]: I0715 11:34:36.103264 1918 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 11:34:36.103287 kubelet[1918]: I0715 11:34:36.103283 1918 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 11:34:36.103415 kubelet[1918]: I0715 11:34:36.103299 1918 state_mem.go:36] "Initialized new in-memory state store" Jul 15 11:34:36.103487 kubelet[1918]: I0715 11:34:36.103431 1918 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 11:34:36.103487 kubelet[1918]: I0715 11:34:36.103442 1918 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 11:34:36.103487 kubelet[1918]: I0715 11:34:36.103458 1918 policy_none.go:49] "None policy: Start" Jul 15 11:34:36.104089 kubelet[1918]: I0715 11:34:36.104073 1918 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 11:34:36.104089 kubelet[1918]: I0715 11:34:36.104092 1918 state_mem.go:35] "Initializing new in-memory state store" Jul 15 11:34:36.104227 kubelet[1918]: I0715 11:34:36.104212 1918 state_mem.go:75] "Updated machine memory state" Jul 15 11:34:36.107386 kubelet[1918]: I0715 11:34:36.107363 1918 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 11:34:36.107528 kubelet[1918]: I0715 11:34:36.107505 1918 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 11:34:36.107572 kubelet[1918]: I0715 11:34:36.107522 1918 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 11:34:36.107662 kubelet[1918]: I0715 11:34:36.107643 1918 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 11:34:36.198102 kubelet[1918]: E0715 11:34:36.197978 1918 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 11:34:36.213919 kubelet[1918]: I0715 11:34:36.213902 1918 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 11:34:36.218700 kubelet[1918]: I0715 11:34:36.218661 1918 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 15 11:34:36.218763 kubelet[1918]: I0715 11:34:36.218731 1918 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 11:34:36.275974 kubelet[1918]: I0715 11:34:36.275936 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d55443fb259276b1a888e6655305e83c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d55443fb259276b1a888e6655305e83c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:34:36.275974 kubelet[1918]: I0715 11:34:36.275971 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:36.275974 kubelet[1918]: I0715 11:34:36.275986 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:36.276193 kubelet[1918]: I0715 11:34:36.276001 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 11:34:36.276193 kubelet[1918]: I0715 11:34:36.276013 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d55443fb259276b1a888e6655305e83c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d55443fb259276b1a888e6655305e83c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:34:36.276193 kubelet[1918]: I0715 11:34:36.276025 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:36.276193 kubelet[1918]: I0715 11:34:36.276038 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:36.276193 kubelet[1918]: I0715 11:34:36.276052 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 11:34:36.276308 kubelet[1918]: I0715 11:34:36.276075 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d55443fb259276b1a888e6655305e83c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d55443fb259276b1a888e6655305e83c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 11:34:36.495149 kubelet[1918]: E0715 11:34:36.495033 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:36.498284 kubelet[1918]: E0715 11:34:36.498259 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:36.498393 kubelet[1918]: E0715 11:34:36.498348 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:36.764403 sudo[1953]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 11:34:36.764581 sudo[1953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 15 11:34:37.064750 kubelet[1918]: I0715 11:34:37.064614 1918 apiserver.go:52] "Watching apiserver" Jul 15 11:34:37.075700 kubelet[1918]: I0715 11:34:37.075654 1918 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 11:34:37.100287 kubelet[1918]: E0715 11:34:37.100254 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:37.100788 kubelet[1918]: E0715 11:34:37.100774 1918 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:37.105983 kubelet[1918]: E0715 11:34:37.105952 1918 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 11:34:37.106123 kubelet[1918]: E0715 11:34:37.106104 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:37.126105 kubelet[1918]: I0715 11:34:37.125917 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.125897578 podStartE2EDuration="1.125897578s" podCreationTimestamp="2025-07-15 11:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:34:37.11937084 +0000 UTC m=+1.113621478" watchObservedRunningTime="2025-07-15 11:34:37.125897578 +0000 UTC m=+1.120148197" Jul 15 11:34:37.131688 kubelet[1918]: I0715 11:34:37.131598 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.131579784 podStartE2EDuration="3.131579784s" podCreationTimestamp="2025-07-15 11:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:34:37.126160961 +0000 UTC m=+1.120411579" watchObservedRunningTime="2025-07-15 11:34:37.131579784 +0000 UTC m=+1.125830402" Jul 15 11:34:37.131864 kubelet[1918]: I0715 11:34:37.131791 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.131787701 podStartE2EDuration="1.131787701s" 
podCreationTimestamp="2025-07-15 11:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:34:37.131695215 +0000 UTC m=+1.125945833" watchObservedRunningTime="2025-07-15 11:34:37.131787701 +0000 UTC m=+1.126038319" Jul 15 11:34:37.214907 sudo[1953]: pam_unix(sudo:session): session closed for user root Jul 15 11:34:38.101706 kubelet[1918]: E0715 11:34:38.101657 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:38.994549 sudo[1306]: pam_unix(sudo:session): session closed for user root Jul 15 11:34:38.995568 sshd[1303]: pam_unix(sshd:session): session closed for user core Jul 15 11:34:38.997927 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:59386.service: Deactivated successfully. Jul 15 11:34:38.998747 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 11:34:38.998880 systemd[1]: session-5.scope: Consumed 4.303s CPU time. Jul 15 11:34:38.999236 systemd-logind[1195]: Session 5 logged out. Waiting for processes to exit. Jul 15 11:34:38.999911 systemd-logind[1195]: Removed session 5. Jul 15 11:34:40.404068 kubelet[1918]: I0715 11:34:40.404030 1918 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 11:34:40.404505 env[1212]: time="2025-07-15T11:34:40.404310141Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 15 11:34:40.404675 kubelet[1918]: I0715 11:34:40.404541 1918 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 11:34:40.603181 kubelet[1918]: E0715 11:34:40.603148 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:40.956807 systemd[1]: Created slice kubepods-besteffort-pod84943c39_1c85_4ad5_acdc_e9f274d118d0.slice. Jul 15 11:34:40.970836 systemd[1]: Created slice kubepods-burstable-pod5b503fe9_4981_42e6_8af1_5bb6d5d11ce2.slice. Jul 15 11:34:41.006848 kubelet[1918]: I0715 11:34:41.006801 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-run\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.006848 kubelet[1918]: I0715 11:34:41.006851 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cni-path\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007093 kubelet[1918]: I0715 11:34:41.006872 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-xtables-lock\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007093 kubelet[1918]: I0715 11:34:41.006888 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-clustermesh-secrets\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007093 kubelet[1918]: I0715 11:34:41.006906 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-hostproc\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007093 kubelet[1918]: I0715 11:34:41.006923 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdq46\" (UniqueName: \"kubernetes.io/projected/84943c39-1c85-4ad5-acdc-e9f274d118d0-kube-api-access-jdq46\") pod \"kube-proxy-6ps2w\" (UID: \"84943c39-1c85-4ad5-acdc-e9f274d118d0\") " pod="kube-system/kube-proxy-6ps2w" Jul 15 11:34:41.007093 kubelet[1918]: I0715 11:34:41.006939 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-lib-modules\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007093 kubelet[1918]: I0715 11:34:41.006953 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-bpf-maps\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007371 kubelet[1918]: I0715 11:34:41.006970 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-etc-cni-netd\") pod \"cilium-jgs29\" (UID: 
\"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007371 kubelet[1918]: I0715 11:34:41.006986 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-hubble-tls\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007371 kubelet[1918]: I0715 11:34:41.007008 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/84943c39-1c85-4ad5-acdc-e9f274d118d0-kube-proxy\") pod \"kube-proxy-6ps2w\" (UID: \"84943c39-1c85-4ad5-acdc-e9f274d118d0\") " pod="kube-system/kube-proxy-6ps2w" Jul 15 11:34:41.007371 kubelet[1918]: I0715 11:34:41.007026 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84943c39-1c85-4ad5-acdc-e9f274d118d0-lib-modules\") pod \"kube-proxy-6ps2w\" (UID: \"84943c39-1c85-4ad5-acdc-e9f274d118d0\") " pod="kube-system/kube-proxy-6ps2w" Jul 15 11:34:41.007371 kubelet[1918]: I0715 11:34:41.007047 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-cgroup\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007371 kubelet[1918]: I0715 11:34:41.007085 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-host-proc-sys-net\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007541 kubelet[1918]: I0715 
11:34:41.007111 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84943c39-1c85-4ad5-acdc-e9f274d118d0-xtables-lock\") pod \"kube-proxy-6ps2w\" (UID: \"84943c39-1c85-4ad5-acdc-e9f274d118d0\") " pod="kube-system/kube-proxy-6ps2w" Jul 15 11:34:41.007541 kubelet[1918]: I0715 11:34:41.007136 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-config-path\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007541 kubelet[1918]: I0715 11:34:41.007163 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-host-proc-sys-kernel\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.007541 kubelet[1918]: I0715 11:34:41.007181 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdtl\" (UniqueName: \"kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-kube-api-access-6xdtl\") pod \"cilium-jgs29\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " pod="kube-system/cilium-jgs29" Jul 15 11:34:41.108416 kubelet[1918]: I0715 11:34:41.108382 1918 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 15 11:34:41.118264 kubelet[1918]: E0715 11:34:41.118233 1918 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 15 11:34:41.118264 kubelet[1918]: E0715 11:34:41.118259 1918 projected.go:194] Error preparing data for projected volume kube-api-access-6xdtl for pod kube-system/cilium-jgs29: configmap "kube-root-ca.crt" not found Jul 15 11:34:41.118430 kubelet[1918]: E0715 11:34:41.118309 1918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-kube-api-access-6xdtl podName:5b503fe9-4981-42e6-8af1-5bb6d5d11ce2 nodeName:}" failed. No retries permitted until 2025-07-15 11:34:41.618290536 +0000 UTC m=+5.612541154 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6xdtl" (UniqueName: "kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-kube-api-access-6xdtl") pod "cilium-jgs29" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2") : configmap "kube-root-ca.crt" not found Jul 15 11:34:41.118430 kubelet[1918]: E0715 11:34:41.118232 1918 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 15 11:34:41.118571 kubelet[1918]: E0715 11:34:41.118439 1918 projected.go:194] Error preparing data for projected volume kube-api-access-jdq46 for pod kube-system/kube-proxy-6ps2w: configmap "kube-root-ca.crt" not found Jul 15 11:34:41.118571 kubelet[1918]: E0715 11:34:41.118460 1918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84943c39-1c85-4ad5-acdc-e9f274d118d0-kube-api-access-jdq46 podName:84943c39-1c85-4ad5-acdc-e9f274d118d0 nodeName:}" failed. No retries permitted until 2025-07-15 11:34:41.618453575 +0000 UTC m=+5.612704193 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jdq46" (UniqueName: "kubernetes.io/projected/84943c39-1c85-4ad5-acdc-e9f274d118d0-kube-api-access-jdq46") pod "kube-proxy-6ps2w" (UID: "84943c39-1c85-4ad5-acdc-e9f274d118d0") : configmap "kube-root-ca.crt" not found Jul 15 11:34:41.392964 systemd[1]: Created slice kubepods-besteffort-pod1d4fe80c_b571_4073_887c_590d0f7be1d0.slice. Jul 15 11:34:41.410987 kubelet[1918]: I0715 11:34:41.410932 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlgv\" (UniqueName: \"kubernetes.io/projected/1d4fe80c-b571-4073-887c-590d0f7be1d0-kube-api-access-7hlgv\") pod \"cilium-operator-5d85765b45-twnrz\" (UID: \"1d4fe80c-b571-4073-887c-590d0f7be1d0\") " pod="kube-system/cilium-operator-5d85765b45-twnrz" Jul 15 11:34:41.410987 kubelet[1918]: I0715 11:34:41.410970 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d4fe80c-b571-4073-887c-590d0f7be1d0-cilium-config-path\") pod \"cilium-operator-5d85765b45-twnrz\" (UID: \"1d4fe80c-b571-4073-887c-590d0f7be1d0\") " pod="kube-system/cilium-operator-5d85765b45-twnrz" Jul 15 11:34:41.696666 kubelet[1918]: E0715 11:34:41.696541 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:41.697208 env[1212]: time="2025-07-15T11:34:41.697156646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-twnrz,Uid:1d4fe80c-b571-4073-887c-590d0f7be1d0,Namespace:kube-system,Attempt:0,}" Jul 15 11:34:41.870305 kubelet[1918]: E0715 11:34:41.870266 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:41.870843 env[1212]: 
time="2025-07-15T11:34:41.870786641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ps2w,Uid:84943c39-1c85-4ad5-acdc-e9f274d118d0,Namespace:kube-system,Attempt:0,}" Jul 15 11:34:41.873594 kubelet[1918]: E0715 11:34:41.873542 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:41.873960 env[1212]: time="2025-07-15T11:34:41.873927211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgs29,Uid:5b503fe9-4981-42e6-8af1-5bb6d5d11ce2,Namespace:kube-system,Attempt:0,}" Jul 15 11:34:41.889178 env[1212]: time="2025-07-15T11:34:41.889022379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:34:41.889178 env[1212]: time="2025-07-15T11:34:41.889147326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:34:41.889364 env[1212]: time="2025-07-15T11:34:41.889216087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:34:41.889508 env[1212]: time="2025-07-15T11:34:41.889471071Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da pid=2011 runtime=io.containerd.runc.v2 Jul 15 11:34:41.900084 systemd[1]: Started cri-containerd-f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da.scope. Jul 15 11:34:41.908760 env[1212]: time="2025-07-15T11:34:41.908669299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:34:41.948142 env[1212]: time="2025-07-15T11:34:41.908916960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:34:41.948142 env[1212]: time="2025-07-15T11:34:41.908946976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:34:41.948142 env[1212]: time="2025-07-15T11:34:41.909144361Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2ec87ee2bbd5d0c2f68c69739eb53b1cacc129733d8bbeaadb519ec5cfe69dc pid=2038 runtime=io.containerd.runc.v2 Jul 15 11:34:41.948142 env[1212]: time="2025-07-15T11:34:41.913536141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:34:41.948142 env[1212]: time="2025-07-15T11:34:41.913563684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:34:41.948142 env[1212]: time="2025-07-15T11:34:41.913587108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:34:41.948142 env[1212]: time="2025-07-15T11:34:41.913724018Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408 pid=2060 runtime=io.containerd.runc.v2 Jul 15 11:34:41.933893 systemd[1]: Started cri-containerd-7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408.scope. Jul 15 11:34:41.945360 systemd[1]: Started cri-containerd-a2ec87ee2bbd5d0c2f68c69739eb53b1cacc129733d8bbeaadb519ec5cfe69dc.scope. 
Jul 15 11:34:41.950804 env[1212]: time="2025-07-15T11:34:41.950736672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-twnrz,Uid:1d4fe80c-b571-4073-887c-590d0f7be1d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da\"" Jul 15 11:34:41.952466 kubelet[1918]: E0715 11:34:41.951607 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:41.956172 env[1212]: time="2025-07-15T11:34:41.956136147Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 11:34:41.964565 env[1212]: time="2025-07-15T11:34:41.964512220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgs29,Uid:5b503fe9-4981-42e6-8af1-5bb6d5d11ce2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\"" Jul 15 11:34:41.965645 kubelet[1918]: E0715 11:34:41.965394 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:41.971106 env[1212]: time="2025-07-15T11:34:41.971058285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ps2w,Uid:84943c39-1c85-4ad5-acdc-e9f274d118d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2ec87ee2bbd5d0c2f68c69739eb53b1cacc129733d8bbeaadb519ec5cfe69dc\"" Jul 15 11:34:41.971644 kubelet[1918]: E0715 11:34:41.971526 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:41.972975 env[1212]: time="2025-07-15T11:34:41.972940283Z" level=info msg="CreateContainer within sandbox 
\"a2ec87ee2bbd5d0c2f68c69739eb53b1cacc129733d8bbeaadb519ec5cfe69dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 11:34:42.115772 env[1212]: time="2025-07-15T11:34:42.115648961Z" level=info msg="CreateContainer within sandbox \"a2ec87ee2bbd5d0c2f68c69739eb53b1cacc129733d8bbeaadb519ec5cfe69dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d3a7797c64490602f542f1fce8d06422670a9cbd823b18c4e419ab7944a1806b\"" Jul 15 11:34:42.116264 env[1212]: time="2025-07-15T11:34:42.116231999Z" level=info msg="StartContainer for \"d3a7797c64490602f542f1fce8d06422670a9cbd823b18c4e419ab7944a1806b\"" Jul 15 11:34:42.132155 systemd[1]: Started cri-containerd-d3a7797c64490602f542f1fce8d06422670a9cbd823b18c4e419ab7944a1806b.scope. Jul 15 11:34:42.156237 env[1212]: time="2025-07-15T11:34:42.156184867Z" level=info msg="StartContainer for \"d3a7797c64490602f542f1fce8d06422670a9cbd823b18c4e419ab7944a1806b\" returns successfully" Jul 15 11:34:43.112987 kubelet[1918]: E0715 11:34:43.112949 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:43.152008 kubelet[1918]: E0715 11:34:43.151660 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:43.340841 kubelet[1918]: I0715 11:34:43.340774 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6ps2w" podStartSLOduration=3.3407414859999998 podStartE2EDuration="3.340741486s" podCreationTimestamp="2025-07-15 11:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:34:43.340730745 +0000 UTC m=+7.334981363" watchObservedRunningTime="2025-07-15 11:34:43.340741486 +0000 UTC m=+7.334992104" 
Jul 15 11:34:43.526398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount937007719.mount: Deactivated successfully. Jul 15 11:34:44.113931 kubelet[1918]: E0715 11:34:44.113895 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:44.114258 kubelet[1918]: E0715 11:34:44.114112 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:45.131015 env[1212]: time="2025-07-15T11:34:45.130950013Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:45.132930 env[1212]: time="2025-07-15T11:34:45.132877075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:45.134572 env[1212]: time="2025-07-15T11:34:45.134541870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:45.135070 env[1212]: time="2025-07-15T11:34:45.135046366Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 15 11:34:45.136097 env[1212]: time="2025-07-15T11:34:45.136071890Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 11:34:45.137125 env[1212]: time="2025-07-15T11:34:45.137089659Z" level=info msg="CreateContainer within sandbox \"f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 11:34:45.149261 env[1212]: time="2025-07-15T11:34:45.149232031Z" level=info msg="CreateContainer within sandbox \"f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\"" Jul 15 11:34:45.149651 env[1212]: time="2025-07-15T11:34:45.149600358Z" level=info msg="StartContainer for \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\"" Jul 15 11:34:45.153947 kubelet[1918]: E0715 11:34:45.153918 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:45.169010 systemd[1]: Started cri-containerd-fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273.scope. Jul 15 11:34:45.321275 env[1212]: time="2025-07-15T11:34:45.321205205Z" level=info msg="StartContainer for \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\" returns successfully" Jul 15 11:34:45.930465 update_engine[1201]: I0715 11:34:45.930423 1201 update_attempter.cc:509] Updating boot flags... 
Jul 15 11:34:46.118643 kubelet[1918]: E0715 11:34:46.118577 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:46.118885 kubelet[1918]: E0715 11:34:46.118840 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:47.122019 kubelet[1918]: E0715 11:34:47.121970 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:50.608130 kubelet[1918]: E0715 11:34:50.608083 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:50.870587 kubelet[1918]: I0715 11:34:50.870535 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-twnrz" podStartSLOduration=6.68802796 podStartE2EDuration="9.870520424s" podCreationTimestamp="2025-07-15 11:34:41 +0000 UTC" firstStartedPulling="2025-07-15 11:34:41.953395357 +0000 UTC m=+5.947645965" lastFinishedPulling="2025-07-15 11:34:45.135887811 +0000 UTC m=+9.130138429" observedRunningTime="2025-07-15 11:34:46.138116018 +0000 UTC m=+10.132366636" watchObservedRunningTime="2025-07-15 11:34:50.870520424 +0000 UTC m=+14.864771042" Jul 15 11:34:52.924568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732218968.mount: Deactivated successfully. 
Jul 15 11:34:57.702129 env[1212]: time="2025-07-15T11:34:57.702050840Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:57.705293 env[1212]: time="2025-07-15T11:34:57.705076732Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:57.708977 env[1212]: time="2025-07-15T11:34:57.708908973Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 11:34:57.709709 env[1212]: time="2025-07-15T11:34:57.709660519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 15 11:34:57.716794 env[1212]: time="2025-07-15T11:34:57.716751962Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:34:57.888134 env[1212]: time="2025-07-15T11:34:57.888052240Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\"" Jul 15 11:34:57.889383 env[1212]: time="2025-07-15T11:34:57.889336941Z" level=info msg="StartContainer for \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\"" Jul 15 11:34:57.905612 systemd[1]: Started 
cri-containerd-150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1.scope. Jul 15 11:34:57.936771 systemd[1]: cri-containerd-150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1.scope: Deactivated successfully. Jul 15 11:34:57.982094 env[1212]: time="2025-07-15T11:34:57.981676975Z" level=info msg="StartContainer for \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\" returns successfully" Jul 15 11:34:58.494882 kubelet[1918]: E0715 11:34:58.494837 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:58.628321 env[1212]: time="2025-07-15T11:34:58.628254560Z" level=info msg="shim disconnected" id=150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1 Jul 15 11:34:58.628321 env[1212]: time="2025-07-15T11:34:58.628305346Z" level=warning msg="cleaning up after shim disconnected" id=150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1 namespace=k8s.io Jul 15 11:34:58.628321 env[1212]: time="2025-07-15T11:34:58.628316086Z" level=info msg="cleaning up dead shim" Jul 15 11:34:58.634034 env[1212]: time="2025-07-15T11:34:58.633993861Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:34:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2394 runtime=io.containerd.runc.v2\n" Jul 15 11:34:58.726173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1-rootfs.mount: Deactivated successfully. 
Jul 15 11:34:59.496898 kubelet[1918]: E0715 11:34:59.496865 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:34:59.498454 env[1212]: time="2025-07-15T11:34:59.498411039Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 11:34:59.644095 env[1212]: time="2025-07-15T11:34:59.644035773Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\"" Jul 15 11:34:59.644701 env[1212]: time="2025-07-15T11:34:59.644639630Z" level=info msg="StartContainer for \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\"" Jul 15 11:34:59.664009 systemd[1]: Started cri-containerd-144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46.scope. Jul 15 11:34:59.686510 env[1212]: time="2025-07-15T11:34:59.684715200Z" level=info msg="StartContainer for \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\" returns successfully" Jul 15 11:34:59.694225 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 11:34:59.694438 systemd[1]: Stopped systemd-sysctl.service. Jul 15 11:34:59.694614 systemd[1]: Stopping systemd-sysctl.service... Jul 15 11:34:59.695951 systemd[1]: Starting systemd-sysctl.service... Jul 15 11:34:59.696161 systemd[1]: cri-containerd-144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46.scope: Deactivated successfully. Jul 15 11:34:59.705984 systemd[1]: Finished systemd-sysctl.service. 
Jul 15 11:34:59.718361 env[1212]: time="2025-07-15T11:34:59.718316897Z" level=info msg="shim disconnected" id=144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46 Jul 15 11:34:59.718361 env[1212]: time="2025-07-15T11:34:59.718356451Z" level=warning msg="cleaning up after shim disconnected" id=144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46 namespace=k8s.io Jul 15 11:34:59.718512 env[1212]: time="2025-07-15T11:34:59.718364957Z" level=info msg="cleaning up dead shim" Jul 15 11:34:59.726182 systemd[1]: run-containerd-runc-k8s.io-144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46-runc.xhQ7Fk.mount: Deactivated successfully. Jul 15 11:34:59.726295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46-rootfs.mount: Deactivated successfully. Jul 15 11:34:59.727169 env[1212]: time="2025-07-15T11:34:59.727142086Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:34:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2458 runtime=io.containerd.runc.v2\n" Jul 15 11:35:00.246120 systemd[1]: Started sshd@5-10.0.0.101:22-10.0.0.1:41362.service. Jul 15 11:35:00.288124 sshd[2471]: Accepted publickey for core from 10.0.0.1 port 41362 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:00.289341 sshd[2471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:00.292615 systemd-logind[1195]: New session 6 of user core. Jul 15 11:35:00.293332 systemd[1]: Started session-6.scope. Jul 15 11:35:00.406363 sshd[2471]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:00.408364 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:41362.service: Deactivated successfully. Jul 15 11:35:00.409178 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 11:35:00.409750 systemd-logind[1195]: Session 6 logged out. Waiting for processes to exit. 
Jul 15 11:35:00.410405 systemd-logind[1195]: Removed session 6. Jul 15 11:35:00.500080 kubelet[1918]: E0715 11:35:00.499948 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:00.501846 env[1212]: time="2025-07-15T11:35:00.501776083Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 11:35:00.518865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158747357.mount: Deactivated successfully. Jul 15 11:35:00.522296 env[1212]: time="2025-07-15T11:35:00.522257022Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\"" Jul 15 11:35:00.522753 env[1212]: time="2025-07-15T11:35:00.522729983Z" level=info msg="StartContainer for \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\"" Jul 15 11:35:00.536499 systemd[1]: Started cri-containerd-1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e.scope. Jul 15 11:35:00.562975 systemd[1]: cri-containerd-1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e.scope: Deactivated successfully. 
Jul 15 11:35:00.565530 env[1212]: time="2025-07-15T11:35:00.565464126Z" level=info msg="StartContainer for \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\" returns successfully" Jul 15 11:35:00.592083 env[1212]: time="2025-07-15T11:35:00.592022466Z" level=info msg="shim disconnected" id=1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e Jul 15 11:35:00.592083 env[1212]: time="2025-07-15T11:35:00.592063082Z" level=warning msg="cleaning up after shim disconnected" id=1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e namespace=k8s.io Jul 15 11:35:00.592083 env[1212]: time="2025-07-15T11:35:00.592072049Z" level=info msg="cleaning up dead shim" Jul 15 11:35:00.597842 env[1212]: time="2025-07-15T11:35:00.597820702Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2529 runtime=io.containerd.runc.v2\n" Jul 15 11:35:00.726001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e-rootfs.mount: Deactivated successfully. 
Jul 15 11:35:01.503924 kubelet[1918]: E0715 11:35:01.503890 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:01.506078 env[1212]: time="2025-07-15T11:35:01.505914966Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 11:35:01.525095 env[1212]: time="2025-07-15T11:35:01.525029633Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\"" Jul 15 11:35:01.525594 env[1212]: time="2025-07-15T11:35:01.525557457Z" level=info msg="StartContainer for \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\"" Jul 15 11:35:01.541192 systemd[1]: Started cri-containerd-107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112.scope. Jul 15 11:35:01.562710 systemd[1]: cri-containerd-107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112.scope: Deactivated successfully. 
Jul 15 11:35:01.564256 env[1212]: time="2025-07-15T11:35:01.564214095Z" level=info msg="StartContainer for \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\" returns successfully" Jul 15 11:35:01.582570 env[1212]: time="2025-07-15T11:35:01.582517666Z" level=info msg="shim disconnected" id=107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112 Jul 15 11:35:01.582570 env[1212]: time="2025-07-15T11:35:01.582563051Z" level=warning msg="cleaning up after shim disconnected" id=107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112 namespace=k8s.io Jul 15 11:35:01.582570 env[1212]: time="2025-07-15T11:35:01.582571537Z" level=info msg="cleaning up dead shim" Jul 15 11:35:01.589197 env[1212]: time="2025-07-15T11:35:01.589139960Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2584 runtime=io.containerd.runc.v2\n" Jul 15 11:35:01.725924 systemd[1]: run-containerd-runc-k8s.io-107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112-runc.cvoNwY.mount: Deactivated successfully. Jul 15 11:35:01.726012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112-rootfs.mount: Deactivated successfully. 
Jul 15 11:35:02.507504 kubelet[1918]: E0715 11:35:02.507433 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:02.510402 env[1212]: time="2025-07-15T11:35:02.508949967Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 11:35:02.529242 env[1212]: time="2025-07-15T11:35:02.529188951Z" level=info msg="CreateContainer within sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\"" Jul 15 11:35:02.529856 env[1212]: time="2025-07-15T11:35:02.529828886Z" level=info msg="StartContainer for \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\"" Jul 15 11:35:02.545986 systemd[1]: Started cri-containerd-b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89.scope. Jul 15 11:35:02.577153 env[1212]: time="2025-07-15T11:35:02.577091070Z" level=info msg="StartContainer for \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\" returns successfully" Jul 15 11:35:02.673956 kubelet[1918]: I0715 11:35:02.673922 1918 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 11:35:02.703814 systemd[1]: Created slice kubepods-burstable-pod54d04236_4bef_4741_a8d6_54bfcc53f5c7.slice. Jul 15 11:35:02.707717 systemd[1]: Created slice kubepods-burstable-pod0dfaefc4_9dc1_4505_b3fa_9bd29f6b487b.slice. 
Jul 15 11:35:02.714308 kubelet[1918]: I0715 11:35:02.714279 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54d04236-4bef-4741-a8d6-54bfcc53f5c7-config-volume\") pod \"coredns-7c65d6cfc9-gzvhx\" (UID: \"54d04236-4bef-4741-a8d6-54bfcc53f5c7\") " pod="kube-system/coredns-7c65d6cfc9-gzvhx" Jul 15 11:35:02.714308 kubelet[1918]: I0715 11:35:02.714309 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4qwj\" (UniqueName: \"kubernetes.io/projected/0dfaefc4-9dc1-4505-b3fa-9bd29f6b487b-kube-api-access-x4qwj\") pod \"coredns-7c65d6cfc9-l6hv7\" (UID: \"0dfaefc4-9dc1-4505-b3fa-9bd29f6b487b\") " pod="kube-system/coredns-7c65d6cfc9-l6hv7" Jul 15 11:35:02.714435 kubelet[1918]: I0715 11:35:02.714325 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktxw\" (UniqueName: \"kubernetes.io/projected/54d04236-4bef-4741-a8d6-54bfcc53f5c7-kube-api-access-qktxw\") pod \"coredns-7c65d6cfc9-gzvhx\" (UID: \"54d04236-4bef-4741-a8d6-54bfcc53f5c7\") " pod="kube-system/coredns-7c65d6cfc9-gzvhx" Jul 15 11:35:02.714435 kubelet[1918]: I0715 11:35:02.714338 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dfaefc4-9dc1-4505-b3fa-9bd29f6b487b-config-volume\") pod \"coredns-7c65d6cfc9-l6hv7\" (UID: \"0dfaefc4-9dc1-4505-b3fa-9bd29f6b487b\") " pod="kube-system/coredns-7c65d6cfc9-l6hv7" Jul 15 11:35:03.008558 kubelet[1918]: E0715 11:35:03.008470 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:03.010645 kubelet[1918]: E0715 11:35:03.010601 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:03.011104 env[1212]: time="2025-07-15T11:35:03.011055307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzvhx,Uid:54d04236-4bef-4741-a8d6-54bfcc53f5c7,Namespace:kube-system,Attempt:0,}" Jul 15 11:35:03.011738 env[1212]: time="2025-07-15T11:35:03.011711100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l6hv7,Uid:0dfaefc4-9dc1-4505-b3fa-9bd29f6b487b,Namespace:kube-system,Attempt:0,}" Jul 15 11:35:03.513970 kubelet[1918]: E0715 11:35:03.513943 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:04.515212 kubelet[1918]: E0715 11:35:04.515169 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:04.700558 systemd-networkd[1033]: cilium_host: Link UP Jul 15 11:35:04.700676 systemd-networkd[1033]: cilium_net: Link UP Jul 15 11:35:04.703113 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 15 11:35:04.703164 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 15 11:35:04.703403 systemd-networkd[1033]: cilium_net: Gained carrier Jul 15 11:35:04.703564 systemd-networkd[1033]: cilium_host: Gained carrier Jul 15 11:35:04.712771 systemd-networkd[1033]: cilium_net: Gained IPv6LL Jul 15 11:35:04.775949 systemd-networkd[1033]: cilium_vxlan: Link UP Jul 15 11:35:04.775956 systemd-networkd[1033]: cilium_vxlan: Gained carrier Jul 15 11:35:04.967723 kernel: NET: Registered PF_ALG protocol family Jul 15 11:35:05.410004 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:41372.service. 
Jul 15 11:35:05.454468 sshd[3037]: Accepted publickey for core from 10.0.0.1 port 41372 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:05.456110 sshd[3037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:05.460208 systemd-logind[1195]: New session 7 of user core. Jul 15 11:35:05.461059 systemd[1]: Started session-7.scope. Jul 15 11:35:05.516356 kubelet[1918]: E0715 11:35:05.516305 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:05.538217 systemd-networkd[1033]: lxc_health: Link UP Jul 15 11:35:05.554583 systemd-networkd[1033]: lxc_health: Gained carrier Jul 15 11:35:05.554779 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 15 11:35:05.614152 sshd[3037]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:05.617215 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:41372.service: Deactivated successfully. Jul 15 11:35:05.618009 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 11:35:05.618658 systemd-logind[1195]: Session 7 logged out. Waiting for processes to exit. Jul 15 11:35:05.619507 systemd-logind[1195]: Removed session 7. 
Jul 15 11:35:05.710872 systemd-networkd[1033]: cilium_host: Gained IPv6LL Jul 15 11:35:05.868291 systemd-networkd[1033]: lxc2b541cfc1f50: Link UP Jul 15 11:35:05.874717 kernel: eth0: renamed from tmpfc240 Jul 15 11:35:05.887962 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 15 11:35:05.888090 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2b541cfc1f50: link becomes ready Jul 15 11:35:05.888521 systemd-networkd[1033]: lxc2b541cfc1f50: Gained carrier Jul 15 11:35:05.893900 kubelet[1918]: I0715 11:35:05.893009 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jgs29" podStartSLOduration=10.146930762 podStartE2EDuration="25.892986835s" podCreationTimestamp="2025-07-15 11:34:40 +0000 UTC" firstStartedPulling="2025-07-15 11:34:41.966087064 +0000 UTC m=+5.960337692" lastFinishedPulling="2025-07-15 11:34:57.712143147 +0000 UTC m=+21.706393765" observedRunningTime="2025-07-15 11:35:03.539524258 +0000 UTC m=+27.533774876" watchObservedRunningTime="2025-07-15 11:35:05.892986835 +0000 UTC m=+29.887237453" Jul 15 11:35:05.901603 systemd-networkd[1033]: lxc88147addb24b: Link UP Jul 15 11:35:05.913723 kernel: eth0: renamed from tmp27e00 Jul 15 11:35:05.923426 systemd-networkd[1033]: lxc88147addb24b: Gained carrier Jul 15 11:35:05.923775 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc88147addb24b: link becomes ready Jul 15 11:35:06.414860 systemd-networkd[1033]: cilium_vxlan: Gained IPv6LL Jul 15 11:35:06.517732 kubelet[1918]: E0715 11:35:06.517704 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:06.670859 systemd-networkd[1033]: lxc_health: Gained IPv6LL Jul 15 11:35:06.990813 systemd-networkd[1033]: lxc88147addb24b: Gained IPv6LL Jul 15 11:35:07.310914 systemd-networkd[1033]: lxc2b541cfc1f50: Gained IPv6LL Jul 15 11:35:07.879803 kubelet[1918]: I0715 11:35:07.879744 1918 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 15 11:35:07.880368 kubelet[1918]: E0715 11:35:07.880332 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:08.522069 kubelet[1918]: E0715 11:35:08.522040 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:09.371470 env[1212]: time="2025-07-15T11:35:09.371386178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:35:09.371470 env[1212]: time="2025-07-15T11:35:09.371427637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:35:09.371470 env[1212]: time="2025-07-15T11:35:09.371437515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:35:09.373603 env[1212]: time="2025-07-15T11:35:09.373427314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:35:09.373603 env[1212]: time="2025-07-15T11:35:09.373471126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:35:09.373603 env[1212]: time="2025-07-15T11:35:09.373484421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:35:09.373745 env[1212]: time="2025-07-15T11:35:09.373698003Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc24035f0445378388d0d6bf3836cf4bd8be704c9b3f0dd960938cf6675d6a3f pid=3182 runtime=io.containerd.runc.v2 Jul 15 11:35:09.374061 env[1212]: time="2025-07-15T11:35:09.372803040Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27e00372d25a4ad81a1bd6dc36fa41e070e4fbadb28dddbda3aeef56426c0110 pid=3175 runtime=io.containerd.runc.v2 Jul 15 11:35:09.392121 systemd[1]: run-containerd-runc-k8s.io-27e00372d25a4ad81a1bd6dc36fa41e070e4fbadb28dddbda3aeef56426c0110-runc.jmPUv7.mount: Deactivated successfully. Jul 15 11:35:09.395866 systemd[1]: Started cri-containerd-27e00372d25a4ad81a1bd6dc36fa41e070e4fbadb28dddbda3aeef56426c0110.scope. Jul 15 11:35:09.396883 systemd[1]: Started cri-containerd-fc24035f0445378388d0d6bf3836cf4bd8be704c9b3f0dd960938cf6675d6a3f.scope. 
Jul 15 11:35:09.412204 systemd-resolved[1148]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:35:09.413532 systemd-resolved[1148]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 11:35:09.434652 env[1212]: time="2025-07-15T11:35:09.434589215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l6hv7,Uid:0dfaefc4-9dc1-4505-b3fa-9bd29f6b487b,Namespace:kube-system,Attempt:0,} returns sandbox id \"27e00372d25a4ad81a1bd6dc36fa41e070e4fbadb28dddbda3aeef56426c0110\"" Jul 15 11:35:09.435174 kubelet[1918]: E0715 11:35:09.435140 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:09.437381 env[1212]: time="2025-07-15T11:35:09.437342749Z" level=info msg="CreateContainer within sandbox \"27e00372d25a4ad81a1bd6dc36fa41e070e4fbadb28dddbda3aeef56426c0110\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:35:09.443963 env[1212]: time="2025-07-15T11:35:09.443912104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzvhx,Uid:54d04236-4bef-4741-a8d6-54bfcc53f5c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc24035f0445378388d0d6bf3836cf4bd8be704c9b3f0dd960938cf6675d6a3f\"" Jul 15 11:35:09.444546 kubelet[1918]: E0715 11:35:09.444521 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:09.445837 env[1212]: time="2025-07-15T11:35:09.445811715Z" level=info msg="CreateContainer within sandbox \"fc24035f0445378388d0d6bf3836cf4bd8be704c9b3f0dd960938cf6675d6a3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 11:35:09.617970 env[1212]: time="2025-07-15T11:35:09.617895097Z" level=info msg="CreateContainer 
within sandbox \"fc24035f0445378388d0d6bf3836cf4bd8be704c9b3f0dd960938cf6675d6a3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35fc4056ecd3e591b3b12d488715b429cd2602e3140f1bd9f12496f526324172\"" Jul 15 11:35:09.618515 env[1212]: time="2025-07-15T11:35:09.618477171Z" level=info msg="StartContainer for \"35fc4056ecd3e591b3b12d488715b429cd2602e3140f1bd9f12496f526324172\"" Jul 15 11:35:09.630805 env[1212]: time="2025-07-15T11:35:09.629935966Z" level=info msg="CreateContainer within sandbox \"27e00372d25a4ad81a1bd6dc36fa41e070e4fbadb28dddbda3aeef56426c0110\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a29b74038cf4003a6802a34a571486ddfe13c74e6a4a304d0dcc24528747cfe\"" Jul 15 11:35:09.631230 env[1212]: time="2025-07-15T11:35:09.631188679Z" level=info msg="StartContainer for \"4a29b74038cf4003a6802a34a571486ddfe13c74e6a4a304d0dcc24528747cfe\"" Jul 15 11:35:09.661262 systemd[1]: Started cri-containerd-35fc4056ecd3e591b3b12d488715b429cd2602e3140f1bd9f12496f526324172.scope. Jul 15 11:35:09.669013 systemd[1]: Started cri-containerd-4a29b74038cf4003a6802a34a571486ddfe13c74e6a4a304d0dcc24528747cfe.scope. 
Jul 15 11:35:09.689215 env[1212]: time="2025-07-15T11:35:09.689157960Z" level=info msg="StartContainer for \"35fc4056ecd3e591b3b12d488715b429cd2602e3140f1bd9f12496f526324172\" returns successfully" Jul 15 11:35:09.695514 env[1212]: time="2025-07-15T11:35:09.695476374Z" level=info msg="StartContainer for \"4a29b74038cf4003a6802a34a571486ddfe13c74e6a4a304d0dcc24528747cfe\" returns successfully" Jul 15 11:35:10.528957 kubelet[1918]: E0715 11:35:10.528918 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:10.530175 kubelet[1918]: E0715 11:35:10.530136 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:10.619412 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:52364.service. Jul 15 11:35:10.641891 kubelet[1918]: I0715 11:35:10.641831 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-l6hv7" podStartSLOduration=29.64181624 podStartE2EDuration="29.64181624s" podCreationTimestamp="2025-07-15 11:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:35:10.641443801 +0000 UTC m=+34.635694409" watchObservedRunningTime="2025-07-15 11:35:10.64181624 +0000 UTC m=+34.636066858" Jul 15 11:35:10.660920 sshd[3328]: Accepted publickey for core from 10.0.0.1 port 52364 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:10.662104 sshd[3328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:10.665330 systemd-logind[1195]: New session 8 of user core. Jul 15 11:35:10.666060 systemd[1]: Started session-8.scope. 
Jul 15 11:35:10.851052 sshd[3328]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:10.853737 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:52364.service: Deactivated successfully. Jul 15 11:35:10.854447 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 11:35:10.855003 systemd-logind[1195]: Session 8 logged out. Waiting for processes to exit. Jul 15 11:35:10.855744 systemd-logind[1195]: Removed session 8. Jul 15 11:35:11.025853 kubelet[1918]: I0715 11:35:11.025788 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gzvhx" podStartSLOduration=30.025767995 podStartE2EDuration="30.025767995s" podCreationTimestamp="2025-07-15 11:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:35:10.84564667 +0000 UTC m=+34.839897289" watchObservedRunningTime="2025-07-15 11:35:11.025767995 +0000 UTC m=+35.020018613" Jul 15 11:35:11.531586 kubelet[1918]: E0715 11:35:11.531556 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:11.531958 kubelet[1918]: E0715 11:35:11.531599 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:12.533783 kubelet[1918]: E0715 11:35:12.533756 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:12.534147 kubelet[1918]: E0715 11:35:12.533889 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:15.855302 systemd[1]: Started 
sshd@8-10.0.0.101:22-10.0.0.1:52370.service. Jul 15 11:35:15.894592 sshd[3353]: Accepted publickey for core from 10.0.0.1 port 52370 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:15.895498 sshd[3353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:15.898694 systemd-logind[1195]: New session 9 of user core. Jul 15 11:35:15.899613 systemd[1]: Started session-9.scope. Jul 15 11:35:16.006927 sshd[3353]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:16.009311 systemd[1]: sshd@8-10.0.0.101:22-10.0.0.1:52370.service: Deactivated successfully. Jul 15 11:35:16.010061 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 11:35:16.010931 systemd-logind[1195]: Session 9 logged out. Waiting for processes to exit. Jul 15 11:35:16.011735 systemd-logind[1195]: Removed session 9. Jul 15 11:35:21.010456 systemd[1]: Started sshd@9-10.0.0.101:22-10.0.0.1:43310.service. Jul 15 11:35:21.056967 sshd[3367]: Accepted publickey for core from 10.0.0.1 port 43310 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:21.058128 sshd[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:21.061371 systemd-logind[1195]: New session 10 of user core. Jul 15 11:35:21.062112 systemd[1]: Started session-10.scope. Jul 15 11:35:21.170628 sshd[3367]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:21.173884 systemd[1]: sshd@9-10.0.0.101:22-10.0.0.1:43310.service: Deactivated successfully. Jul 15 11:35:21.174462 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 11:35:21.175038 systemd-logind[1195]: Session 10 logged out. Waiting for processes to exit. Jul 15 11:35:21.176078 systemd[1]: Started sshd@10-10.0.0.101:22-10.0.0.1:43312.service. Jul 15 11:35:21.177064 systemd-logind[1195]: Removed session 10. 
Jul 15 11:35:21.216431 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 43312 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:21.217582 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:21.221374 systemd-logind[1195]: New session 11 of user core. Jul 15 11:35:21.222323 systemd[1]: Started session-11.scope. Jul 15 11:35:21.380233 sshd[3381]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:21.382953 systemd[1]: sshd@10-10.0.0.101:22-10.0.0.1:43312.service: Deactivated successfully. Jul 15 11:35:21.383467 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 11:35:21.385757 systemd[1]: Started sshd@11-10.0.0.101:22-10.0.0.1:43322.service. Jul 15 11:35:21.386146 systemd-logind[1195]: Session 11 logged out. Waiting for processes to exit. Jul 15 11:35:21.388286 systemd-logind[1195]: Removed session 11. Jul 15 11:35:21.434890 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 43322 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:21.436179 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:21.439748 systemd-logind[1195]: New session 12 of user core. Jul 15 11:35:21.440460 systemd[1]: Started session-12.scope. Jul 15 11:35:21.559829 sshd[3393]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:21.562761 systemd[1]: sshd@11-10.0.0.101:22-10.0.0.1:43322.service: Deactivated successfully. Jul 15 11:35:21.563494 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 11:35:21.564222 systemd-logind[1195]: Session 12 logged out. Waiting for processes to exit. Jul 15 11:35:21.564923 systemd-logind[1195]: Removed session 12. Jul 15 11:35:26.563652 systemd[1]: Started sshd@12-10.0.0.101:22-10.0.0.1:43324.service. 
Jul 15 11:35:26.603168 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 43324 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:26.604233 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:26.607856 systemd-logind[1195]: New session 13 of user core. Jul 15 11:35:26.608939 systemd[1]: Started session-13.scope. Jul 15 11:35:26.714651 sshd[3406]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:26.716568 systemd[1]: sshd@12-10.0.0.101:22-10.0.0.1:43324.service: Deactivated successfully. Jul 15 11:35:26.717233 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 11:35:26.717987 systemd-logind[1195]: Session 13 logged out. Waiting for processes to exit. Jul 15 11:35:26.718588 systemd-logind[1195]: Removed session 13. Jul 15 11:35:31.718878 systemd[1]: Started sshd@13-10.0.0.101:22-10.0.0.1:54156.service. Jul 15 11:35:31.759553 sshd[3419]: Accepted publickey for core from 10.0.0.1 port 54156 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:31.760651 sshd[3419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:31.763722 systemd-logind[1195]: New session 14 of user core. Jul 15 11:35:31.764637 systemd[1]: Started session-14.scope. Jul 15 11:35:31.868337 sshd[3419]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:31.870818 systemd[1]: sshd@13-10.0.0.101:22-10.0.0.1:54156.service: Deactivated successfully. Jul 15 11:35:31.871399 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 11:35:31.871951 systemd-logind[1195]: Session 14 logged out. Waiting for processes to exit. Jul 15 11:35:31.873182 systemd[1]: Started sshd@14-10.0.0.101:22-10.0.0.1:54164.service. Jul 15 11:35:31.874247 systemd-logind[1195]: Removed session 14. 
Jul 15 11:35:31.912412 sshd[3432]: Accepted publickey for core from 10.0.0.1 port 54164 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:31.913700 sshd[3432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:31.917298 systemd-logind[1195]: New session 15 of user core. Jul 15 11:35:31.918115 systemd[1]: Started session-15.scope. Jul 15 11:35:32.150310 sshd[3432]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:32.153497 systemd[1]: sshd@14-10.0.0.101:22-10.0.0.1:54164.service: Deactivated successfully. Jul 15 11:35:32.154079 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 11:35:32.154828 systemd-logind[1195]: Session 15 logged out. Waiting for processes to exit. Jul 15 11:35:32.156097 systemd[1]: Started sshd@15-10.0.0.101:22-10.0.0.1:54176.service. Jul 15 11:35:32.156841 systemd-logind[1195]: Removed session 15. Jul 15 11:35:32.197990 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 54176 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:32.199114 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:32.202419 systemd-logind[1195]: New session 16 of user core. Jul 15 11:35:32.203181 systemd[1]: Started session-16.scope. Jul 15 11:35:33.670621 systemd[1]: Started sshd@16-10.0.0.101:22-10.0.0.1:54182.service. Jul 15 11:35:33.671526 sshd[3443]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:33.673670 systemd[1]: sshd@15-10.0.0.101:22-10.0.0.1:54176.service: Deactivated successfully. Jul 15 11:35:33.674370 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 11:35:33.675821 systemd-logind[1195]: Session 16 logged out. Waiting for processes to exit. Jul 15 11:35:33.676518 systemd-logind[1195]: Removed session 16. 
Jul 15 11:35:33.718572 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 54182 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:33.719665 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:33.722878 systemd-logind[1195]: New session 17 of user core. Jul 15 11:35:33.723607 systemd[1]: Started session-17.scope. Jul 15 11:35:33.939904 sshd[3461]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:33.943053 systemd[1]: Started sshd@17-10.0.0.101:22-10.0.0.1:54190.service. Jul 15 11:35:33.944805 systemd[1]: sshd@16-10.0.0.101:22-10.0.0.1:54182.service: Deactivated successfully. Jul 15 11:35:33.945434 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 11:35:33.946151 systemd-logind[1195]: Session 17 logged out. Waiting for processes to exit. Jul 15 11:35:33.947181 systemd-logind[1195]: Removed session 17. Jul 15 11:35:33.984973 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 54190 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:33.986164 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:33.989647 systemd-logind[1195]: New session 18 of user core. Jul 15 11:35:33.990366 systemd[1]: Started session-18.scope. Jul 15 11:35:34.094435 sshd[3474]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:34.096940 systemd[1]: sshd@17-10.0.0.101:22-10.0.0.1:54190.service: Deactivated successfully. Jul 15 11:35:34.097757 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 11:35:34.098278 systemd-logind[1195]: Session 18 logged out. Waiting for processes to exit. Jul 15 11:35:34.098912 systemd-logind[1195]: Removed session 18. Jul 15 11:35:39.099029 systemd[1]: Started sshd@18-10.0.0.101:22-10.0.0.1:54202.service. 
Jul 15 11:35:39.138622 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 54202 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:39.139907 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:39.142692 systemd-logind[1195]: New session 19 of user core. Jul 15 11:35:39.143367 systemd[1]: Started session-19.scope. Jul 15 11:35:39.249372 sshd[3490]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:39.252044 systemd[1]: sshd@18-10.0.0.101:22-10.0.0.1:54202.service: Deactivated successfully. Jul 15 11:35:39.252933 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 11:35:39.253556 systemd-logind[1195]: Session 19 logged out. Waiting for processes to exit. Jul 15 11:35:39.254374 systemd-logind[1195]: Removed session 19. Jul 15 11:35:44.254082 systemd[1]: Started sshd@19-10.0.0.101:22-10.0.0.1:57236.service. Jul 15 11:35:44.294953 sshd[3509]: Accepted publickey for core from 10.0.0.1 port 57236 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:44.296023 sshd[3509]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:44.299501 systemd-logind[1195]: New session 20 of user core. Jul 15 11:35:44.300526 systemd[1]: Started session-20.scope. Jul 15 11:35:44.400213 sshd[3509]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:44.402211 systemd[1]: sshd@19-10.0.0.101:22-10.0.0.1:57236.service: Deactivated successfully. Jul 15 11:35:44.402931 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 11:35:44.403757 systemd-logind[1195]: Session 20 logged out. Waiting for processes to exit. Jul 15 11:35:44.404519 systemd-logind[1195]: Removed session 20. 
Jul 15 11:35:49.090523 kubelet[1918]: E0715 11:35:49.090467 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:49.404044 systemd[1]: Started sshd@20-10.0.0.101:22-10.0.0.1:57244.service. Jul 15 11:35:49.443323 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 57244 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:49.444479 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:49.447721 systemd-logind[1195]: New session 21 of user core. Jul 15 11:35:49.448455 systemd[1]: Started session-21.scope. Jul 15 11:35:49.564964 sshd[3522]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:49.567369 systemd[1]: sshd@20-10.0.0.101:22-10.0.0.1:57244.service: Deactivated successfully. Jul 15 11:35:49.568022 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 11:35:49.568730 systemd-logind[1195]: Session 21 logged out. Waiting for processes to exit. Jul 15 11:35:49.569300 systemd-logind[1195]: Removed session 21. Jul 15 11:35:54.091160 kubelet[1918]: E0715 11:35:54.091127 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:35:54.569364 systemd[1]: Started sshd@21-10.0.0.101:22-10.0.0.1:54124.service. Jul 15 11:35:54.610043 sshd[3536]: Accepted publickey for core from 10.0.0.1 port 54124 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:54.611049 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:54.614590 systemd-logind[1195]: New session 22 of user core. Jul 15 11:35:54.615264 systemd[1]: Started session-22.scope. 
Jul 15 11:35:54.718771 sshd[3536]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:54.721500 systemd[1]: sshd@21-10.0.0.101:22-10.0.0.1:54124.service: Deactivated successfully. Jul 15 11:35:54.722040 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 11:35:54.722496 systemd-logind[1195]: Session 22 logged out. Waiting for processes to exit. Jul 15 11:35:54.723484 systemd[1]: Started sshd@22-10.0.0.101:22-10.0.0.1:54134.service. Jul 15 11:35:54.724229 systemd-logind[1195]: Removed session 22. Jul 15 11:35:54.763596 sshd[3549]: Accepted publickey for core from 10.0.0.1 port 54134 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:54.764760 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:54.768546 systemd-logind[1195]: New session 23 of user core. Jul 15 11:35:54.769278 systemd[1]: Started session-23.scope. Jul 15 11:35:56.088375 env[1212]: time="2025-07-15T11:35:56.088325745Z" level=info msg="StopContainer for \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\" with timeout 30 (s)" Jul 15 11:35:56.089620 env[1212]: time="2025-07-15T11:35:56.089585836Z" level=info msg="Stop container \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\" with signal terminated" Jul 15 11:35:56.103227 systemd[1]: cri-containerd-fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273.scope: Deactivated successfully. 
Jul 15 11:35:56.110356 env[1212]: time="2025-07-15T11:35:56.110288815Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 11:35:56.114836 env[1212]: time="2025-07-15T11:35:56.114800739Z" level=info msg="StopContainer for \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\" with timeout 2 (s)" Jul 15 11:35:56.115100 env[1212]: time="2025-07-15T11:35:56.115079733Z" level=info msg="Stop container \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\" with signal terminated" Jul 15 11:35:56.118843 kubelet[1918]: E0715 11:35:56.118816 1918 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 11:35:56.119619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273-rootfs.mount: Deactivated successfully. 
Jul 15 11:35:56.123029 systemd-networkd[1033]: lxc_health: Link DOWN Jul 15 11:35:56.123038 systemd-networkd[1033]: lxc_health: Lost carrier Jul 15 11:35:56.127320 env[1212]: time="2025-07-15T11:35:56.127273699Z" level=info msg="shim disconnected" id=fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273 Jul 15 11:35:56.127320 env[1212]: time="2025-07-15T11:35:56.127315178Z" level=warning msg="cleaning up after shim disconnected" id=fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273 namespace=k8s.io Jul 15 11:35:56.127320 env[1212]: time="2025-07-15T11:35:56.127323885Z" level=info msg="cleaning up dead shim" Jul 15 11:35:56.133367 env[1212]: time="2025-07-15T11:35:56.133288029Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3603 runtime=io.containerd.runc.v2\n" Jul 15 11:35:56.136538 env[1212]: time="2025-07-15T11:35:56.136499866Z" level=info msg="StopContainer for \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\" returns successfully" Jul 15 11:35:56.137203 env[1212]: time="2025-07-15T11:35:56.137173795Z" level=info msg="StopPodSandbox for \"f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da\"" Jul 15 11:35:56.137323 env[1212]: time="2025-07-15T11:35:56.137300177Z" level=info msg="Container to stop \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:35:56.139053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da-shm.mount: Deactivated successfully. Jul 15 11:35:56.151879 systemd[1]: cri-containerd-f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da.scope: Deactivated successfully. Jul 15 11:35:56.157029 systemd[1]: cri-containerd-b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89.scope: Deactivated successfully. 
Jul 15 11:35:56.157530 systemd[1]: cri-containerd-b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89.scope: Consumed 6.235s CPU time. Jul 15 11:35:56.174366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da-rootfs.mount: Deactivated successfully. Jul 15 11:35:56.178766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89-rootfs.mount: Deactivated successfully. Jul 15 11:35:56.181499 env[1212]: time="2025-07-15T11:35:56.181441420Z" level=info msg="shim disconnected" id=f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da Jul 15 11:35:56.181499 env[1212]: time="2025-07-15T11:35:56.181495052Z" level=warning msg="cleaning up after shim disconnected" id=f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da namespace=k8s.io Jul 15 11:35:56.181499 env[1212]: time="2025-07-15T11:35:56.181504590Z" level=info msg="cleaning up dead shim" Jul 15 11:35:56.181927 env[1212]: time="2025-07-15T11:35:56.181898745Z" level=info msg="shim disconnected" id=b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89 Jul 15 11:35:56.181969 env[1212]: time="2025-07-15T11:35:56.181926148Z" level=warning msg="cleaning up after shim disconnected" id=b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89 namespace=k8s.io Jul 15 11:35:56.181969 env[1212]: time="2025-07-15T11:35:56.181934353Z" level=info msg="cleaning up dead shim" Jul 15 11:35:56.187894 env[1212]: time="2025-07-15T11:35:56.187838442Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3649 runtime=io.containerd.runc.v2\n" Jul 15 11:35:56.190114 env[1212]: time="2025-07-15T11:35:56.190009707Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3648 
runtime=io.containerd.runc.v2\n" Jul 15 11:35:56.190165 env[1212]: time="2025-07-15T11:35:56.190115439Z" level=info msg="StopContainer for \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\" returns successfully" Jul 15 11:35:56.190627 env[1212]: time="2025-07-15T11:35:56.190587553Z" level=info msg="TearDown network for sandbox \"f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da\" successfully" Jul 15 11:35:56.190702 env[1212]: time="2025-07-15T11:35:56.190630214Z" level=info msg="StopPodSandbox for \"f74143e37650a2fd04a6d824348b1d6de378b2a0d5f6792a90aedd8bb1a333da\" returns successfully" Jul 15 11:35:56.190774 env[1212]: time="2025-07-15T11:35:56.190741266Z" level=info msg="StopPodSandbox for \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\"" Jul 15 11:35:56.190892 env[1212]: time="2025-07-15T11:35:56.190870514Z" level=info msg="Container to stop \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:35:56.191004 env[1212]: time="2025-07-15T11:35:56.190975365Z" level=info msg="Container to stop \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:35:56.191084 env[1212]: time="2025-07-15T11:35:56.191063824Z" level=info msg="Container to stop \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:35:56.191184 env[1212]: time="2025-07-15T11:35:56.191142755Z" level=info msg="Container to stop \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:35:56.191269 env[1212]: time="2025-07-15T11:35:56.191246554Z" level=info msg="Container to stop \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Jul 15 11:35:56.196543 systemd[1]: cri-containerd-7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408.scope: Deactivated successfully. Jul 15 11:35:56.219622 env[1212]: time="2025-07-15T11:35:56.219575015Z" level=info msg="shim disconnected" id=7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408 Jul 15 11:35:56.219622 env[1212]: time="2025-07-15T11:35:56.219621475Z" level=warning msg="cleaning up after shim disconnected" id=7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408 namespace=k8s.io Jul 15 11:35:56.219622 env[1212]: time="2025-07-15T11:35:56.219629510Z" level=info msg="cleaning up dead shim" Jul 15 11:35:56.225925 env[1212]: time="2025-07-15T11:35:56.225887185Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:35:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3692 runtime=io.containerd.runc.v2\n" Jul 15 11:35:56.226464 env[1212]: time="2025-07-15T11:35:56.226439532Z" level=info msg="TearDown network for sandbox \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" successfully" Jul 15 11:35:56.226464 env[1212]: time="2025-07-15T11:35:56.226461614Z" level=info msg="StopPodSandbox for \"7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408\" returns successfully" Jul 15 11:35:56.229317 kubelet[1918]: I0715 11:35:56.228895 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d4fe80c-b571-4073-887c-590d0f7be1d0-cilium-config-path\") pod \"1d4fe80c-b571-4073-887c-590d0f7be1d0\" (UID: \"1d4fe80c-b571-4073-887c-590d0f7be1d0\") " Jul 15 11:35:56.229317 kubelet[1918]: I0715 11:35:56.228930 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hlgv\" (UniqueName: \"kubernetes.io/projected/1d4fe80c-b571-4073-887c-590d0f7be1d0-kube-api-access-7hlgv\") pod 
\"1d4fe80c-b571-4073-887c-590d0f7be1d0\" (UID: \"1d4fe80c-b571-4073-887c-590d0f7be1d0\") " Jul 15 11:35:56.230705 kubelet[1918]: I0715 11:35:56.230660 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4fe80c-b571-4073-887c-590d0f7be1d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1d4fe80c-b571-4073-887c-590d0f7be1d0" (UID: "1d4fe80c-b571-4073-887c-590d0f7be1d0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 11:35:56.233808 kubelet[1918]: I0715 11:35:56.233772 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4fe80c-b571-4073-887c-590d0f7be1d0-kube-api-access-7hlgv" (OuterVolumeSpecName: "kube-api-access-7hlgv") pod "1d4fe80c-b571-4073-887c-590d0f7be1d0" (UID: "1d4fe80c-b571-4073-887c-590d0f7be1d0"). InnerVolumeSpecName "kube-api-access-7hlgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:35:56.329563 kubelet[1918]: I0715 11:35:56.329501 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-config-path\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329563 kubelet[1918]: I0715 11:35:56.329548 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-etc-cni-netd\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329563 kubelet[1918]: I0715 11:35:56.329564 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-host-proc-sys-kernel\") pod 
\"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329563 kubelet[1918]: I0715 11:35:56.329582 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-clustermesh-secrets\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329843 kubelet[1918]: I0715 11:35:56.329596 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-lib-modules\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329843 kubelet[1918]: I0715 11:35:56.329611 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cni-path\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329843 kubelet[1918]: I0715 11:35:56.329625 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-hostproc\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329843 kubelet[1918]: I0715 11:35:56.329637 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-cgroup\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329843 kubelet[1918]: I0715 11:35:56.329649 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-run\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329843 kubelet[1918]: I0715 11:35:56.329661 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-xtables-lock\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329992 kubelet[1918]: I0715 11:35:56.329656 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.329992 kubelet[1918]: I0715 11:35:56.329696 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xdtl\" (UniqueName: \"kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-kube-api-access-6xdtl\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329992 kubelet[1918]: I0715 11:35:56.329711 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-bpf-maps\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.329992 kubelet[1918]: I0715 11:35:56.329717 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: 
"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.329992 kubelet[1918]: I0715 11:35:56.329723 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-host-proc-sys-net\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.330118 kubelet[1918]: I0715 11:35:56.329733 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.330118 kubelet[1918]: I0715 11:35:56.329738 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-hubble-tls\") pod \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\" (UID: \"5b503fe9-4981-42e6-8af1-5bb6d5d11ce2\") " Jul 15 11:35:56.330118 kubelet[1918]: I0715 11:35:56.329765 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.330118 kubelet[1918]: I0715 11:35:56.329775 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d4fe80c-b571-4073-887c-590d0f7be1d0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.330118 kubelet[1918]: I0715 11:35:56.329784 1918 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hlgv\" (UniqueName: 
\"kubernetes.io/projected/1d4fe80c-b571-4073-887c-590d0f7be1d0-kube-api-access-7hlgv\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.330118 kubelet[1918]: I0715 11:35:56.329792 1918 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.330118 kubelet[1918]: I0715 11:35:56.329798 1918 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.331710 kubelet[1918]: I0715 11:35:56.330337 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.331710 kubelet[1918]: I0715 11:35:56.330364 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.331710 kubelet[1918]: I0715 11:35:56.330380 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cni-path" (OuterVolumeSpecName: "cni-path") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.331710 kubelet[1918]: I0715 11:35:56.330405 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.331710 kubelet[1918]: I0715 11:35:56.330418 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.331986 kubelet[1918]: I0715 11:35:56.330432 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-hostproc" (OuterVolumeSpecName: "hostproc") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.331986 kubelet[1918]: I0715 11:35:56.330447 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:56.332046 kubelet[1918]: I0715 11:35:56.332023 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 11:35:56.333029 kubelet[1918]: I0715 11:35:56.332981 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:35:56.333123 kubelet[1918]: I0715 11:35:56.333086 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-kube-api-access-6xdtl" (OuterVolumeSpecName: "kube-api-access-6xdtl") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "kube-api-access-6xdtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:35:56.333305 kubelet[1918]: I0715 11:35:56.333281 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" (UID: "5b503fe9-4981-42e6-8af1-5bb6d5d11ce2"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 11:35:56.430765 kubelet[1918]: I0715 11:35:56.430707 1918 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430765 kubelet[1918]: I0715 11:35:56.430751 1918 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430765 kubelet[1918]: I0715 11:35:56.430764 1918 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430765 kubelet[1918]: I0715 11:35:56.430776 1918 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430982 kubelet[1918]: I0715 11:35:56.430786 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430982 kubelet[1918]: I0715 11:35:56.430797 1918 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430982 kubelet[1918]: I0715 11:35:56.430810 1918 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xdtl\" (UniqueName: \"kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-kube-api-access-6xdtl\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430982 kubelet[1918]: I0715 11:35:56.430820 1918 
reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430982 kubelet[1918]: I0715 11:35:56.430830 1918 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430982 kubelet[1918]: I0715 11:35:56.430840 1918 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.430982 kubelet[1918]: I0715 11:35:56.430849 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:56.610906 kubelet[1918]: I0715 11:35:56.610870 1918 scope.go:117] "RemoveContainer" containerID="b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89" Jul 15 11:35:56.612714 env[1212]: time="2025-07-15T11:35:56.612658188Z" level=info msg="RemoveContainer for \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\"" Jul 15 11:35:56.614171 systemd[1]: Removed slice kubepods-burstable-pod5b503fe9_4981_42e6_8af1_5bb6d5d11ce2.slice. Jul 15 11:35:56.614248 systemd[1]: kubepods-burstable-pod5b503fe9_4981_42e6_8af1_5bb6d5d11ce2.slice: Consumed 6.327s CPU time. Jul 15 11:35:56.616110 systemd[1]: Removed slice kubepods-besteffort-pod1d4fe80c_b571_4073_887c_590d0f7be1d0.slice. 
Jul 15 11:35:56.616334 env[1212]: time="2025-07-15T11:35:56.616303103Z" level=info msg="RemoveContainer for \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\" returns successfully" Jul 15 11:35:56.616510 kubelet[1918]: I0715 11:35:56.616488 1918 scope.go:117] "RemoveContainer" containerID="107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112" Jul 15 11:35:56.617490 env[1212]: time="2025-07-15T11:35:56.617423217Z" level=info msg="RemoveContainer for \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\"" Jul 15 11:35:56.620859 env[1212]: time="2025-07-15T11:35:56.620811691Z" level=info msg="RemoveContainer for \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\" returns successfully" Jul 15 11:35:56.621040 kubelet[1918]: I0715 11:35:56.621017 1918 scope.go:117] "RemoveContainer" containerID="1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e" Jul 15 11:35:56.622082 env[1212]: time="2025-07-15T11:35:56.622046654Z" level=info msg="RemoveContainer for \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\"" Jul 15 11:35:56.625146 env[1212]: time="2025-07-15T11:35:56.625110107Z" level=info msg="RemoveContainer for \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\" returns successfully" Jul 15 11:35:56.626765 kubelet[1918]: I0715 11:35:56.626744 1918 scope.go:117] "RemoveContainer" containerID="144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46" Jul 15 11:35:56.627994 env[1212]: time="2025-07-15T11:35:56.627951234Z" level=info msg="RemoveContainer for \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\"" Jul 15 11:35:56.631506 env[1212]: time="2025-07-15T11:35:56.630909184Z" level=info msg="RemoveContainer for \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\" returns successfully" Jul 15 11:35:56.631574 kubelet[1918]: I0715 11:35:56.631036 1918 scope.go:117] "RemoveContainer" 
containerID="150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1" Jul 15 11:35:56.632337 env[1212]: time="2025-07-15T11:35:56.632300917Z" level=info msg="RemoveContainer for \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\"" Jul 15 11:35:56.636518 env[1212]: time="2025-07-15T11:35:56.636491588Z" level=info msg="RemoveContainer for \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\" returns successfully" Jul 15 11:35:56.636764 kubelet[1918]: I0715 11:35:56.636737 1918 scope.go:117] "RemoveContainer" containerID="b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89" Jul 15 11:35:56.637156 env[1212]: time="2025-07-15T11:35:56.637053653Z" level=error msg="ContainerStatus for \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\": not found" Jul 15 11:35:56.637316 kubelet[1918]: E0715 11:35:56.637288 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\": not found" containerID="b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89" Jul 15 11:35:56.637448 kubelet[1918]: I0715 11:35:56.637323 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89"} err="failed to get container status \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9bda8ceb89bcda686f892b46feee205045ac5624c2732205ab7883449603b89\": not found" Jul 15 11:35:56.637448 kubelet[1918]: I0715 11:35:56.637445 1918 scope.go:117] "RemoveContainer" 
containerID="107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112" Jul 15 11:35:56.637665 env[1212]: time="2025-07-15T11:35:56.637609507Z" level=error msg="ContainerStatus for \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\": not found" Jul 15 11:35:56.637810 kubelet[1918]: E0715 11:35:56.637763 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\": not found" containerID="107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112" Jul 15 11:35:56.637810 kubelet[1918]: I0715 11:35:56.637788 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112"} err="failed to get container status \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\": rpc error: code = NotFound desc = an error occurred when try to find container \"107aabdf5959e56410cf50acfe83470277a2b7e7a299b14f596fc09b22a27112\": not found" Jul 15 11:35:56.637810 kubelet[1918]: I0715 11:35:56.637803 1918 scope.go:117] "RemoveContainer" containerID="1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e" Jul 15 11:35:56.638147 env[1212]: time="2025-07-15T11:35:56.638096920Z" level=error msg="ContainerStatus for \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\": not found" Jul 15 11:35:56.638338 kubelet[1918]: E0715 11:35:56.638295 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\": not found" containerID="1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e" Jul 15 11:35:56.638400 kubelet[1918]: I0715 11:35:56.638346 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e"} err="failed to get container status \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ef3e8f7b946f8e177bd6e0cb752539366704a36298068680bb2e9b7a024f47e\": not found" Jul 15 11:35:56.638400 kubelet[1918]: I0715 11:35:56.638371 1918 scope.go:117] "RemoveContainer" containerID="144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46" Jul 15 11:35:56.638620 env[1212]: time="2025-07-15T11:35:56.638576748Z" level=error msg="ContainerStatus for \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\": not found" Jul 15 11:35:56.638824 kubelet[1918]: E0715 11:35:56.638788 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\": not found" containerID="144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46" Jul 15 11:35:56.638988 kubelet[1918]: I0715 11:35:56.638828 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46"} err="failed to get container status \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"144cf6f62529185f9715b63bccf305ea5f168ae44768b8a9a1d91aed72188f46\": not found" Jul 15 11:35:56.638988 kubelet[1918]: I0715 11:35:56.638856 1918 scope.go:117] "RemoveContainer" containerID="150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1" Jul 15 11:35:56.639125 env[1212]: time="2025-07-15T11:35:56.639059702Z" level=error msg="ContainerStatus for \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\": not found" Jul 15 11:35:56.639225 kubelet[1918]: E0715 11:35:56.639207 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\": not found" containerID="150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1" Jul 15 11:35:56.639259 kubelet[1918]: I0715 11:35:56.639233 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1"} err="failed to get container status \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"150fd25c4cd44a06c3c090cc575b130361e85ba0b26a6804daf5e474110e97c1\": not found" Jul 15 11:35:56.639259 kubelet[1918]: I0715 11:35:56.639249 1918 scope.go:117] "RemoveContainer" containerID="fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273" Jul 15 11:35:56.640249 env[1212]: time="2025-07-15T11:35:56.640216706Z" level=info msg="RemoveContainer for \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\"" Jul 15 11:35:56.643349 env[1212]: time="2025-07-15T11:35:56.643309966Z" level=info msg="RemoveContainer for 
\"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\" returns successfully" Jul 15 11:35:56.643463 kubelet[1918]: I0715 11:35:56.643435 1918 scope.go:117] "RemoveContainer" containerID="fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273" Jul 15 11:35:56.643787 env[1212]: time="2025-07-15T11:35:56.643711945Z" level=error msg="ContainerStatus for \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\": not found" Jul 15 11:35:56.643920 kubelet[1918]: E0715 11:35:56.643893 1918 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\": not found" containerID="fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273" Jul 15 11:35:56.643970 kubelet[1918]: I0715 11:35:56.643923 1918 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273"} err="failed to get container status \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc11ab288a6f5fd3b6450a9c6fd9992df8634bb89719b6a43d7a759df465a273\": not found" Jul 15 11:35:57.095865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408-rootfs.mount: Deactivated successfully. Jul 15 11:35:57.095991 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7de403ca0f9dd5d8cb7129a1cca9d14584105cb4efd103c000bb9f2a78b55408-shm.mount: Deactivated successfully. 
Jul 15 11:35:57.096093 systemd[1]: var-lib-kubelet-pods-5b503fe9\x2d4981\x2d42e6\x2d8af1\x2d5bb6d5d11ce2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6xdtl.mount: Deactivated successfully. Jul 15 11:35:57.096174 systemd[1]: var-lib-kubelet-pods-1d4fe80c\x2db571\x2d4073\x2d887c\x2d590d0f7be1d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7hlgv.mount: Deactivated successfully. Jul 15 11:35:57.096283 systemd[1]: var-lib-kubelet-pods-5b503fe9\x2d4981\x2d42e6\x2d8af1\x2d5bb6d5d11ce2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 11:35:57.096368 systemd[1]: var-lib-kubelet-pods-5b503fe9\x2d4981\x2d42e6\x2d8af1\x2d5bb6d5d11ce2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 11:35:58.057450 sshd[3549]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:58.059789 systemd[1]: sshd@22-10.0.0.101:22-10.0.0.1:54134.service: Deactivated successfully. Jul 15 11:35:58.060282 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 11:35:58.060797 systemd-logind[1195]: Session 23 logged out. Waiting for processes to exit. Jul 15 11:35:58.062113 systemd[1]: Started sshd@23-10.0.0.101:22-10.0.0.1:54140.service. Jul 15 11:35:58.062724 systemd-logind[1195]: Removed session 23. 
Jul 15 11:35:58.092987 kubelet[1918]: I0715 11:35:58.092938 1918 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d4fe80c-b571-4073-887c-590d0f7be1d0" path="/var/lib/kubelet/pods/1d4fe80c-b571-4073-887c-590d0f7be1d0/volumes" Jul 15 11:35:58.093314 kubelet[1918]: I0715 11:35:58.093297 1918 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" path="/var/lib/kubelet/pods/5b503fe9-4981-42e6-8af1-5bb6d5d11ce2/volumes" Jul 15 11:35:58.103840 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 54140 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:58.105080 sshd[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:58.108324 systemd-logind[1195]: New session 24 of user core. Jul 15 11:35:58.109068 systemd[1]: Started session-24.scope. Jul 15 11:35:58.183180 kubelet[1918]: I0715 11:35:58.183118 1918 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T11:35:58Z","lastTransitionTime":"2025-07-15T11:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 15 11:35:58.754499 sshd[3710]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:58.759014 systemd[1]: Started sshd@24-10.0.0.101:22-10.0.0.1:54152.service. Jul 15 11:35:58.761948 systemd-logind[1195]: Session 24 logged out. Waiting for processes to exit. Jul 15 11:35:58.762671 systemd[1]: sshd@23-10.0.0.101:22-10.0.0.1:54140.service: Deactivated successfully. Jul 15 11:35:58.763440 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 11:35:58.764971 systemd-logind[1195]: Removed session 24. 
Jul 15 11:35:58.767392 kubelet[1918]: E0715 11:35:58.767350 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" containerName="clean-cilium-state" Jul 15 11:35:58.767392 kubelet[1918]: E0715 11:35:58.767380 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" containerName="apply-sysctl-overwrites" Jul 15 11:35:58.767392 kubelet[1918]: E0715 11:35:58.767386 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" containerName="mount-cgroup" Jul 15 11:35:58.767392 kubelet[1918]: E0715 11:35:58.767392 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" containerName="mount-bpf-fs" Jul 15 11:35:58.767392 kubelet[1918]: E0715 11:35:58.767397 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" containerName="cilium-agent" Jul 15 11:35:58.767640 kubelet[1918]: E0715 11:35:58.767403 1918 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d4fe80c-b571-4073-887c-590d0f7be1d0" containerName="cilium-operator" Jul 15 11:35:58.767640 kubelet[1918]: I0715 11:35:58.767424 1918 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4fe80c-b571-4073-887c-590d0f7be1d0" containerName="cilium-operator" Jul 15 11:35:58.767640 kubelet[1918]: I0715 11:35:58.767429 1918 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b503fe9-4981-42e6-8af1-5bb6d5d11ce2" containerName="cilium-agent" Jul 15 11:35:58.773671 systemd[1]: Created slice kubepods-burstable-pod5325396e_6690_4dd3_90ab_60ef296fd141.slice. 
Jul 15 11:35:58.800910 sshd[3722]: Accepted publickey for core from 10.0.0.1 port 54152 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:58.801658 sshd[3722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:58.805207 systemd-logind[1195]: New session 25 of user core. Jul 15 11:35:58.805890 systemd[1]: Started session-25.scope. Jul 15 11:35:58.842838 kubelet[1918]: I0715 11:35:58.842799 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-run\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.842838 kubelet[1918]: I0715 11:35:58.842835 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-host-proc-sys-kernel\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.842961 kubelet[1918]: I0715 11:35:58.842855 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-xtables-lock\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.842961 kubelet[1918]: I0715 11:35:58.842869 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-hostproc\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.842961 kubelet[1918]: I0715 11:35:58.842883 1918 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-lib-modules\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.842961 kubelet[1918]: I0715 11:35:58.842898 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-ipsec-secrets\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.842961 kubelet[1918]: I0715 11:35:58.842911 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-cgroup\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.843083 kubelet[1918]: I0715 11:35:58.842959 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cni-path\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.843083 kubelet[1918]: I0715 11:35:58.842992 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-etc-cni-netd\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.843083 kubelet[1918]: I0715 11:35:58.843005 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-host-proc-sys-net\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.843083 kubelet[1918]: I0715 11:35:58.843017 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5325396e-6690-4dd3-90ab-60ef296fd141-hubble-tls\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.843083 kubelet[1918]: I0715 11:35:58.843041 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-bpf-maps\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.843190 kubelet[1918]: I0715 11:35:58.843091 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-config-path\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.843190 kubelet[1918]: I0715 11:35:58.843145 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5325396e-6690-4dd3-90ab-60ef296fd141-clustermesh-secrets\") pod \"cilium-jbkm6\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.843234 kubelet[1918]: I0715 11:35:58.843214 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdltg\" (UniqueName: \"kubernetes.io/projected/5325396e-6690-4dd3-90ab-60ef296fd141-kube-api-access-zdltg\") pod \"cilium-jbkm6\" (UID: 
\"5325396e-6690-4dd3-90ab-60ef296fd141\") " pod="kube-system/cilium-jbkm6" Jul 15 11:35:58.920388 sshd[3722]: pam_unix(sshd:session): session closed for user core Jul 15 11:35:58.923254 systemd[1]: sshd@24-10.0.0.101:22-10.0.0.1:54152.service: Deactivated successfully. Jul 15 11:35:58.923766 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 11:35:58.924408 systemd-logind[1195]: Session 25 logged out. Waiting for processes to exit. Jul 15 11:35:58.925608 systemd[1]: Started sshd@25-10.0.0.101:22-10.0.0.1:54166.service. Jul 15 11:35:58.926578 systemd-logind[1195]: Removed session 25. Jul 15 11:35:58.934515 kubelet[1918]: E0715 11:35:58.934464 1918 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-zdltg lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-jbkm6" podUID="5325396e-6690-4dd3-90ab-60ef296fd141" Jul 15 11:35:58.972410 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 54166 ssh2: RSA SHA256:HJCyX8JAQ9OMquuEIVT6BTeEdgkyUqyqBnxnhtHUsbo Jul 15 11:35:58.973524 sshd[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 15 11:35:58.976615 systemd-logind[1195]: New session 26 of user core. Jul 15 11:35:58.977463 systemd[1]: Started session-26.scope. 
Jul 15 11:35:59.647703 kubelet[1918]: I0715 11:35:59.647642 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-host-proc-sys-net\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.647703 kubelet[1918]: I0715 11:35:59.647675 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-lib-modules\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648052 kubelet[1918]: I0715 11:35:59.647718 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-etc-cni-netd\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648052 kubelet[1918]: I0715 11:35:59.647732 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-bpf-maps\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648052 kubelet[1918]: I0715 11:35:59.647745 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-run\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648052 kubelet[1918]: I0715 11:35:59.647752 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-lib-modules" (OuterVolumeSpecName: "lib-modules") pod 
"5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.648052 kubelet[1918]: I0715 11:35:59.647750 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.648173 kubelet[1918]: I0715 11:35:59.647787 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.648173 kubelet[1918]: I0715 11:35:59.647796 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.648173 kubelet[1918]: I0715 11:35:59.647800 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.648173 kubelet[1918]: I0715 11:35:59.647758 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-xtables-lock\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648173 kubelet[1918]: I0715 11:35:59.647834 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cni-path\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648280 kubelet[1918]: I0715 11:35:59.647849 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-hostproc\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648280 kubelet[1918]: I0715 11:35:59.647870 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5325396e-6690-4dd3-90ab-60ef296fd141-clustermesh-secrets\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648280 kubelet[1918]: I0715 11:35:59.647878 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cni-path" (OuterVolumeSpecName: "cni-path") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.648280 kubelet[1918]: I0715 11:35:59.647882 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-hostproc" (OuterVolumeSpecName: "hostproc") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.648280 kubelet[1918]: I0715 11:35:59.647888 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-host-proc-sys-kernel\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648397 kubelet[1918]: I0715 11:35:59.647900 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.648397 kubelet[1918]: I0715 11:35:59.647909 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-config-path\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648397 kubelet[1918]: I0715 11:35:59.647915 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.648397 kubelet[1918]: I0715 11:35:59.647929 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-ipsec-secrets\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648397 kubelet[1918]: I0715 11:35:59.647951 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdltg\" (UniqueName: \"kubernetes.io/projected/5325396e-6690-4dd3-90ab-60ef296fd141-kube-api-access-zdltg\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648508 kubelet[1918]: I0715 11:35:59.647968 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-cgroup\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648508 kubelet[1918]: I0715 11:35:59.647984 1918 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5325396e-6690-4dd3-90ab-60ef296fd141-hubble-tls\") pod \"5325396e-6690-4dd3-90ab-60ef296fd141\" (UID: \"5325396e-6690-4dd3-90ab-60ef296fd141\") " Jul 15 11:35:59.648508 kubelet[1918]: I0715 11:35:59.648013 1918 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.648508 kubelet[1918]: I0715 11:35:59.648022 1918 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.648508 kubelet[1918]: I0715 11:35:59.648041 1918 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.648508 kubelet[1918]: I0715 11:35:59.648048 1918 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.648508 kubelet[1918]: I0715 11:35:59.648056 1918 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.648508 kubelet[1918]: I0715 11:35:59.648064 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.648676 kubelet[1918]: I0715 11:35:59.648071 1918 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.648676 kubelet[1918]: I0715 11:35:59.648079 1918 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.648676 kubelet[1918]: I0715 11:35:59.648086 1918 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.649529 kubelet[1918]: 
I0715 11:35:59.649500 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 11:35:59.650972 kubelet[1918]: I0715 11:35:59.650938 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5325396e-6690-4dd3-90ab-60ef296fd141-kube-api-access-zdltg" (OuterVolumeSpecName: "kube-api-access-zdltg") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "kube-api-access-zdltg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:35:59.651029 kubelet[1918]: I0715 11:35:59.650977 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 11:35:59.651731 systemd[1]: var-lib-kubelet-pods-5325396e\x2d6690\x2d4dd3\x2d90ab\x2d60ef296fd141-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzdltg.mount: Deactivated successfully. Jul 15 11:35:59.651821 systemd[1]: var-lib-kubelet-pods-5325396e\x2d6690\x2d4dd3\x2d90ab\x2d60ef296fd141-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 15 11:35:59.652490 kubelet[1918]: I0715 11:35:59.652246 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5325396e-6690-4dd3-90ab-60ef296fd141-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 11:35:59.652490 kubelet[1918]: I0715 11:35:59.652320 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5325396e-6690-4dd3-90ab-60ef296fd141-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 11:35:59.652922 kubelet[1918]: I0715 11:35:59.652884 1918 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5325396e-6690-4dd3-90ab-60ef296fd141" (UID: "5325396e-6690-4dd3-90ab-60ef296fd141"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 11:35:59.748381 kubelet[1918]: I0715 11:35:59.748341 1918 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5325396e-6690-4dd3-90ab-60ef296fd141-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.748381 kubelet[1918]: I0715 11:35:59.748370 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.748381 kubelet[1918]: I0715 11:35:59.748378 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.748381 kubelet[1918]: I0715 11:35:59.748385 1918 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdltg\" (UniqueName: \"kubernetes.io/projected/5325396e-6690-4dd3-90ab-60ef296fd141-kube-api-access-zdltg\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.748381 kubelet[1918]: I0715 11:35:59.748394 1918 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5325396e-6690-4dd3-90ab-60ef296fd141-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.748639 kubelet[1918]: I0715 11:35:59.748402 1918 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5325396e-6690-4dd3-90ab-60ef296fd141-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 11:35:59.948025 systemd[1]: var-lib-kubelet-pods-5325396e\x2d6690\x2d4dd3\x2d90ab\x2d60ef296fd141-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 15 11:35:59.948126 systemd[1]: var-lib-kubelet-pods-5325396e\x2d6690\x2d4dd3\x2d90ab\x2d60ef296fd141-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 15 11:36:00.095695 systemd[1]: Removed slice kubepods-burstable-pod5325396e_6690_4dd3_90ab_60ef296fd141.slice. Jul 15 11:36:00.650718 systemd[1]: Created slice kubepods-burstable-podd5ffbec0_8805_4627_8657_3e8c552864cb.slice. Jul 15 11:36:00.653414 kubelet[1918]: I0715 11:36:00.653382 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-etc-cni-netd\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653414 kubelet[1918]: I0715 11:36:00.653413 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5ffbec0-8805-4627-8657-3e8c552864cb-cilium-config-path\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653676 kubelet[1918]: I0715 11:36:00.653426 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-host-proc-sys-kernel\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653676 kubelet[1918]: I0715 11:36:00.653439 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-cni-path\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653676 kubelet[1918]: I0715 11:36:00.653451 1918 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5ffbec0-8805-4627-8657-3e8c552864cb-clustermesh-secrets\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653676 kubelet[1918]: I0715 11:36:00.653465 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-cilium-run\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653676 kubelet[1918]: I0715 11:36:00.653478 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-bpf-maps\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653676 kubelet[1918]: I0715 11:36:00.653489 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-xtables-lock\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653838 kubelet[1918]: I0715 11:36:00.653502 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d5ffbec0-8805-4627-8657-3e8c552864cb-cilium-ipsec-secrets\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653838 kubelet[1918]: I0715 11:36:00.653517 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-host-proc-sys-net\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653838 kubelet[1918]: I0715 11:36:00.653531 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5ffbec0-8805-4627-8657-3e8c552864cb-hubble-tls\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653838 kubelet[1918]: I0715 11:36:00.653543 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxll\" (UniqueName: \"kubernetes.io/projected/d5ffbec0-8805-4627-8657-3e8c552864cb-kube-api-access-vcxll\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653838 kubelet[1918]: I0715 11:36:00.653557 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-hostproc\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653838 kubelet[1918]: I0715 11:36:00.653568 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-lib-modules\") pod \"cilium-4pkd7\" (UID: \"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.653971 kubelet[1918]: I0715 11:36:00.653580 1918 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5ffbec0-8805-4627-8657-3e8c552864cb-cilium-cgroup\") pod \"cilium-4pkd7\" (UID: 
\"d5ffbec0-8805-4627-8657-3e8c552864cb\") " pod="kube-system/cilium-4pkd7" Jul 15 11:36:00.953126 kubelet[1918]: E0715 11:36:00.952864 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:36:00.953470 env[1212]: time="2025-07-15T11:36:00.953377805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4pkd7,Uid:d5ffbec0-8805-4627-8657-3e8c552864cb,Namespace:kube-system,Attempt:0,}" Jul 15 11:36:00.965820 env[1212]: time="2025-07-15T11:36:00.965741508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 15 11:36:00.965820 env[1212]: time="2025-07-15T11:36:00.965789009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 15 11:36:00.965820 env[1212]: time="2025-07-15T11:36:00.965799369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 15 11:36:00.966169 env[1212]: time="2025-07-15T11:36:00.966083842Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b pid=3765 runtime=io.containerd.runc.v2 Jul 15 11:36:00.976574 systemd[1]: Started cri-containerd-0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b.scope. 
Jul 15 11:36:00.992777 env[1212]: time="2025-07-15T11:36:00.992409261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4pkd7,Uid:d5ffbec0-8805-4627-8657-3e8c552864cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\"" Jul 15 11:36:00.992942 kubelet[1918]: E0715 11:36:00.992919 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:36:00.995924 env[1212]: time="2025-07-15T11:36:00.995880993Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 11:36:01.012730 env[1212]: time="2025-07-15T11:36:01.012661321Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9083f90c3b1326b9d1a1afaa0b192ec2bde5c74aff6ff49bbc17060bc483d627\"" Jul 15 11:36:01.013351 env[1212]: time="2025-07-15T11:36:01.013320900Z" level=info msg="StartContainer for \"9083f90c3b1326b9d1a1afaa0b192ec2bde5c74aff6ff49bbc17060bc483d627\"" Jul 15 11:36:01.026111 systemd[1]: Started cri-containerd-9083f90c3b1326b9d1a1afaa0b192ec2bde5c74aff6ff49bbc17060bc483d627.scope. Jul 15 11:36:01.047483 env[1212]: time="2025-07-15T11:36:01.046396513Z" level=info msg="StartContainer for \"9083f90c3b1326b9d1a1afaa0b192ec2bde5c74aff6ff49bbc17060bc483d627\" returns successfully" Jul 15 11:36:01.051331 systemd[1]: cri-containerd-9083f90c3b1326b9d1a1afaa0b192ec2bde5c74aff6ff49bbc17060bc483d627.scope: Deactivated successfully. 
Jul 15 11:36:01.082996 env[1212]: time="2025-07-15T11:36:01.082829916Z" level=info msg="shim disconnected" id=9083f90c3b1326b9d1a1afaa0b192ec2bde5c74aff6ff49bbc17060bc483d627 Jul 15 11:36:01.082996 env[1212]: time="2025-07-15T11:36:01.082889370Z" level=warning msg="cleaning up after shim disconnected" id=9083f90c3b1326b9d1a1afaa0b192ec2bde5c74aff6ff49bbc17060bc483d627 namespace=k8s.io Jul 15 11:36:01.082996 env[1212]: time="2025-07-15T11:36:01.082898677Z" level=info msg="cleaning up dead shim" Jul 15 11:36:01.092181 env[1212]: time="2025-07-15T11:36:01.092117507Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:36:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3848 runtime=io.containerd.runc.v2\n" Jul 15 11:36:01.119438 kubelet[1918]: E0715 11:36:01.119401 1918 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 11:36:01.622866 kubelet[1918]: E0715 11:36:01.622836 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 11:36:01.624389 env[1212]: time="2025-07-15T11:36:01.624335425Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 11:36:01.636328 env[1212]: time="2025-07-15T11:36:01.636271772Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f8dafbc980015fc17384a407826426c9cc76846b52110d0f7313d170e8d0640\"" Jul 15 11:36:01.636826 env[1212]: time="2025-07-15T11:36:01.636739986Z" level=info msg="StartContainer for 
\"0f8dafbc980015fc17384a407826426c9cc76846b52110d0f7313d170e8d0640\""
Jul 15 11:36:01.651833 systemd[1]: Started cri-containerd-0f8dafbc980015fc17384a407826426c9cc76846b52110d0f7313d170e8d0640.scope.
Jul 15 11:36:01.683848 env[1212]: time="2025-07-15T11:36:01.683785878Z" level=info msg="StartContainer for \"0f8dafbc980015fc17384a407826426c9cc76846b52110d0f7313d170e8d0640\" returns successfully"
Jul 15 11:36:01.686993 systemd[1]: cri-containerd-0f8dafbc980015fc17384a407826426c9cc76846b52110d0f7313d170e8d0640.scope: Deactivated successfully.
Jul 15 11:36:01.706116 env[1212]: time="2025-07-15T11:36:01.706064329Z" level=info msg="shim disconnected" id=0f8dafbc980015fc17384a407826426c9cc76846b52110d0f7313d170e8d0640
Jul 15 11:36:01.706116 env[1212]: time="2025-07-15T11:36:01.706108183Z" level=warning msg="cleaning up after shim disconnected" id=0f8dafbc980015fc17384a407826426c9cc76846b52110d0f7313d170e8d0640 namespace=k8s.io
Jul 15 11:36:01.706116 env[1212]: time="2025-07-15T11:36:01.706116238Z" level=info msg="cleaning up dead shim"
Jul 15 11:36:01.712086 env[1212]: time="2025-07-15T11:36:01.712029257Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:36:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3911 runtime=io.containerd.runc.v2\n"
Jul 15 11:36:02.093135 kubelet[1918]: I0715 11:36:02.092999 1918 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5325396e-6690-4dd3-90ab-60ef296fd141" path="/var/lib/kubelet/pods/5325396e-6690-4dd3-90ab-60ef296fd141/volumes"
Jul 15 11:36:02.626926 kubelet[1918]: E0715 11:36:02.626892 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:02.629139 env[1212]: time="2025-07-15T11:36:02.629090070Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 11:36:02.655866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount462842168.mount: Deactivated successfully.
Jul 15 11:36:02.662333 env[1212]: time="2025-07-15T11:36:02.662271978Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b0c078aae878372873d8098ec83ff8cc3243e8f7bfeb929d2ef5b8a5ba2ae309\""
Jul 15 11:36:02.662807 env[1212]: time="2025-07-15T11:36:02.662782252Z" level=info msg="StartContainer for \"b0c078aae878372873d8098ec83ff8cc3243e8f7bfeb929d2ef5b8a5ba2ae309\""
Jul 15 11:36:02.679796 systemd[1]: Started cri-containerd-b0c078aae878372873d8098ec83ff8cc3243e8f7bfeb929d2ef5b8a5ba2ae309.scope.
Jul 15 11:36:02.702169 env[1212]: time="2025-07-15T11:36:02.702128231Z" level=info msg="StartContainer for \"b0c078aae878372873d8098ec83ff8cc3243e8f7bfeb929d2ef5b8a5ba2ae309\" returns successfully"
Jul 15 11:36:02.703588 systemd[1]: cri-containerd-b0c078aae878372873d8098ec83ff8cc3243e8f7bfeb929d2ef5b8a5ba2ae309.scope: Deactivated successfully.
Jul 15 11:36:02.725164 env[1212]: time="2025-07-15T11:36:02.725099484Z" level=info msg="shim disconnected" id=b0c078aae878372873d8098ec83ff8cc3243e8f7bfeb929d2ef5b8a5ba2ae309
Jul 15 11:36:02.725164 env[1212]: time="2025-07-15T11:36:02.725153408Z" level=warning msg="cleaning up after shim disconnected" id=b0c078aae878372873d8098ec83ff8cc3243e8f7bfeb929d2ef5b8a5ba2ae309 namespace=k8s.io
Jul 15 11:36:02.725164 env[1212]: time="2025-07-15T11:36:02.725162395Z" level=info msg="cleaning up dead shim"
Jul 15 11:36:02.731409 env[1212]: time="2025-07-15T11:36:02.731363278Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:36:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3967 runtime=io.containerd.runc.v2\n"
Jul 15 11:36:02.962260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0c078aae878372873d8098ec83ff8cc3243e8f7bfeb929d2ef5b8a5ba2ae309-rootfs.mount: Deactivated successfully.
Jul 15 11:36:03.630458 kubelet[1918]: E0715 11:36:03.630424 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:03.631989 env[1212]: time="2025-07-15T11:36:03.631916289Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 11:36:03.645091 env[1212]: time="2025-07-15T11:36:03.645018740Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"81fda19679649d5bd3d506a3a7bf19d085161418c7fbb61e4c23997f7ad8185e\""
Jul 15 11:36:03.646540 env[1212]: time="2025-07-15T11:36:03.646428259Z" level=info msg="StartContainer for \"81fda19679649d5bd3d506a3a7bf19d085161418c7fbb61e4c23997f7ad8185e\""
Jul 15 11:36:03.665671 systemd[1]: Started cri-containerd-81fda19679649d5bd3d506a3a7bf19d085161418c7fbb61e4c23997f7ad8185e.scope.
Jul 15 11:36:03.692189 env[1212]: time="2025-07-15T11:36:03.692145690Z" level=info msg="StartContainer for \"81fda19679649d5bd3d506a3a7bf19d085161418c7fbb61e4c23997f7ad8185e\" returns successfully"
Jul 15 11:36:03.692390 systemd[1]: cri-containerd-81fda19679649d5bd3d506a3a7bf19d085161418c7fbb61e4c23997f7ad8185e.scope: Deactivated successfully.
Jul 15 11:36:03.712310 env[1212]: time="2025-07-15T11:36:03.712247983Z" level=info msg="shim disconnected" id=81fda19679649d5bd3d506a3a7bf19d085161418c7fbb61e4c23997f7ad8185e
Jul 15 11:36:03.712310 env[1212]: time="2025-07-15T11:36:03.712304170Z" level=warning msg="cleaning up after shim disconnected" id=81fda19679649d5bd3d506a3a7bf19d085161418c7fbb61e4c23997f7ad8185e namespace=k8s.io
Jul 15 11:36:03.712310 env[1212]: time="2025-07-15T11:36:03.712313508Z" level=info msg="cleaning up dead shim"
Jul 15 11:36:03.719919 env[1212]: time="2025-07-15T11:36:03.719875471Z" level=warning msg="cleanup warnings time=\"2025-07-15T11:36:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4021 runtime=io.containerd.runc.v2\n"
Jul 15 11:36:03.962291 systemd[1]: run-containerd-runc-k8s.io-81fda19679649d5bd3d506a3a7bf19d085161418c7fbb61e4c23997f7ad8185e-runc.470Klw.mount: Deactivated successfully.
Jul 15 11:36:03.962381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81fda19679649d5bd3d506a3a7bf19d085161418c7fbb61e4c23997f7ad8185e-rootfs.mount: Deactivated successfully.
Jul 15 11:36:04.090874 kubelet[1918]: E0715 11:36:04.090832 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:04.634372 kubelet[1918]: E0715 11:36:04.634341 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:04.636763 env[1212]: time="2025-07-15T11:36:04.635893465Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 11:36:04.649235 env[1212]: time="2025-07-15T11:36:04.649192400Z" level=info msg="CreateContainer within sandbox \"0381050f82fd0b3f67a656c1a8dcf8f248343799ee6d42b5c4debaab94ae3c6b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cd55440409adcf3764e7999562a45000b5cc597deffad4658a7ab33079dc6827\""
Jul 15 11:36:04.650068 env[1212]: time="2025-07-15T11:36:04.650012854Z" level=info msg="StartContainer for \"cd55440409adcf3764e7999562a45000b5cc597deffad4658a7ab33079dc6827\""
Jul 15 11:36:04.667053 systemd[1]: Started cri-containerd-cd55440409adcf3764e7999562a45000b5cc597deffad4658a7ab33079dc6827.scope.
Jul 15 11:36:04.693940 env[1212]: time="2025-07-15T11:36:04.693883057Z" level=info msg="StartContainer for \"cd55440409adcf3764e7999562a45000b5cc597deffad4658a7ab33079dc6827\" returns successfully"
Jul 15 11:36:04.928720 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 15 11:36:04.962321 systemd[1]: run-containerd-runc-k8s.io-cd55440409adcf3764e7999562a45000b5cc597deffad4658a7ab33079dc6827-runc.Sn9w9l.mount: Deactivated successfully.
Jul 15 11:36:05.638727 kubelet[1918]: E0715 11:36:05.638695 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:05.651329 kubelet[1918]: I0715 11:36:05.651282 1918 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4pkd7" podStartSLOduration=5.651265935 podStartE2EDuration="5.651265935s" podCreationTimestamp="2025-07-15 11:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 11:36:05.651040866 +0000 UTC m=+89.645291484" watchObservedRunningTime="2025-07-15 11:36:05.651265935 +0000 UTC m=+89.645516553"
Jul 15 11:36:06.954246 kubelet[1918]: E0715 11:36:06.954214 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:07.411791 systemd-networkd[1033]: lxc_health: Link UP
Jul 15 11:36:07.420519 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 15 11:36:07.420769 systemd-networkd[1033]: lxc_health: Gained carrier
Jul 15 11:36:08.953747 systemd-networkd[1033]: lxc_health: Gained IPv6LL
Jul 15 11:36:08.954152 kubelet[1918]: E0715 11:36:08.954137 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:09.091355 kubelet[1918]: E0715 11:36:09.091325 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:09.254210 systemd[1]: run-containerd-runc-k8s.io-cd55440409adcf3764e7999562a45000b5cc597deffad4658a7ab33079dc6827-runc.72W52Q.mount: Deactivated successfully.
Jul 15 11:36:09.645281 kubelet[1918]: E0715 11:36:09.645218 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:10.647041 kubelet[1918]: E0715 11:36:10.647005 1918 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 11:36:13.409426 systemd[1]: run-containerd-runc-k8s.io-cd55440409adcf3764e7999562a45000b5cc597deffad4658a7ab33079dc6827-runc.oE6KMZ.mount: Deactivated successfully.
Jul 15 11:36:13.455233 sshd[3736]: pam_unix(sshd:session): session closed for user core
Jul 15 11:36:13.457401 systemd[1]: sshd@25-10.0.0.101:22-10.0.0.1:54166.service: Deactivated successfully.
Jul 15 11:36:13.458219 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 11:36:13.458777 systemd-logind[1195]: Session 26 logged out. Waiting for processes to exit.
Jul 15 11:36:13.459416 systemd-logind[1195]: Removed session 26.