Jul 10 00:34:48.873866 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed Jul 9 23:09:45 -00 2025
Jul 10 00:34:48.873886 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d
Jul 10 00:34:48.873894 kernel: BIOS-provided physical RAM map:
Jul 10 00:34:48.873900 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 10 00:34:48.873905 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 10 00:34:48.873910 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 10 00:34:48.873917 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 10 00:34:48.873923 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 10 00:34:48.873930 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 10 00:34:48.873935 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 10 00:34:48.873941 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 00:34:48.873946 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 10 00:34:48.873952 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 10 00:34:48.873958 kernel: NX (Execute Disable) protection: active
Jul 10 00:34:48.873966 kernel: SMBIOS 2.8 present.
Jul 10 00:34:48.873972 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 10 00:34:48.873978 kernel: Hypervisor detected: KVM
Jul 10 00:34:48.873985 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 10 00:34:48.873991 kernel: kvm-clock: cpu 0, msr 2519a001, primary cpu clock
Jul 10 00:34:48.873997 kernel: kvm-clock: using sched offset of 2462757408 cycles
Jul 10 00:34:48.874003 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 10 00:34:48.874010 kernel: tsc: Detected 2794.748 MHz processor
Jul 10 00:34:48.874016 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:34:48.874024 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:34:48.874030 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 10 00:34:48.874037 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:34:48.874043 kernel: Using GB pages for direct mapping
Jul 10 00:34:48.874049 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:34:48.874056 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 10 00:34:48.874062 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:48.874068 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:48.874074 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:48.874082 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 10 00:34:48.874088 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:48.874094 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:48.874101 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:48.874107 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:34:48.874113 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 10 00:34:48.874119 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 10 00:34:48.874126 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 10 00:34:48.874135 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 10 00:34:48.874142 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 10 00:34:48.874149 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 10 00:34:48.874155 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 10 00:34:48.874162 kernel: No NUMA configuration found
Jul 10 00:34:48.874169 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 10 00:34:48.874176 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 10 00:34:48.874183 kernel: Zone ranges:
Jul 10 00:34:48.874190 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:34:48.874197 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 10 00:34:48.874203 kernel: Normal empty
Jul 10 00:34:48.874210 kernel: Movable zone start for each node
Jul 10 00:34:48.874216 kernel: Early memory node ranges
Jul 10 00:34:48.874223 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 10 00:34:48.874230 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 10 00:34:48.874238 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 10 00:34:48.874245 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:34:48.874251 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 10 00:34:48.874258 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 10 00:34:48.874265 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 10 00:34:48.874271 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 10 00:34:48.874278 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:34:48.874285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 10 00:34:48.874291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 10 00:34:48.874298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:34:48.874306 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 10 00:34:48.874313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 10 00:34:48.874320 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:34:48.874326 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 10 00:34:48.874333 kernel: TSC deadline timer available
Jul 10 00:34:48.874340 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 10 00:34:48.874346 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 10 00:34:48.874353 kernel: kvm-guest: setup PV sched yield
Jul 10 00:34:48.874359 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 10 00:34:48.874367 kernel: Booting paravirtualized kernel on KVM
Jul 10 00:34:48.874374 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:34:48.874381 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Jul 10 00:34:48.874388 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Jul 10 00:34:48.874395 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Jul 10 00:34:48.874401 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 10 00:34:48.874408 kernel: kvm-guest: setup async PF for cpu 0
Jul 10 00:34:48.874415 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Jul 10 00:34:48.874421 kernel: kvm-guest: PV spinlocks enabled
Jul 10 00:34:48.874429 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 10 00:34:48.874436 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 10 00:34:48.874442 kernel: Policy zone: DMA32
Jul 10 00:34:48.874450 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d
Jul 10 00:34:48.874458 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:34:48.874466 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:34:48.874475 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:34:48.874484 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:34:48.874494 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2275K rwdata, 13724K rodata, 47472K init, 4108K bss, 134796K reserved, 0K cma-reserved)
Jul 10 00:34:48.874501 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:34:48.874508 kernel: ftrace: allocating 34602 entries in 136 pages
Jul 10 00:34:48.874515 kernel: ftrace: allocated 136 pages with 2 groups
Jul 10 00:34:48.874522 kernel: rcu: Hierarchical RCU implementation.
Jul 10 00:34:48.874529 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:34:48.874536 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:34:48.874542 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:34:48.874549 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:34:48.874557 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:34:48.874564 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:34:48.874571 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 10 00:34:48.874578 kernel: random: crng init done
Jul 10 00:34:48.874584 kernel: Console: colour VGA+ 80x25
Jul 10 00:34:48.874591 kernel: printk: console [ttyS0] enabled
Jul 10 00:34:48.874597 kernel: ACPI: Core revision 20210730
Jul 10 00:34:48.874604 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 10 00:34:48.874611 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:34:48.874619 kernel: x2apic enabled
Jul 10 00:34:48.874626 kernel: Switched APIC routing to physical x2apic.
Jul 10 00:34:48.874632 kernel: kvm-guest: setup PV IPIs
Jul 10 00:34:48.874639 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 10 00:34:48.874646 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 10 00:34:48.874653 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 10 00:34:48.874659 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 10 00:34:48.874666 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 10 00:34:48.874673 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 10 00:34:48.874685 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:34:48.874692 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:34:48.874700 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:34:48.874708 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 10 00:34:48.874715 kernel: RETBleed: Mitigation: untrained return thunk
Jul 10 00:34:48.874722 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 10 00:34:48.874729 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 10 00:34:48.874736 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:34:48.874744 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:34:48.874752 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:34:48.874771 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:34:48.874778 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 10 00:34:48.874785 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:34:48.874792 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:34:48.874800 kernel: LSM: Security Framework initializing
Jul 10 00:34:48.874806 kernel: SELinux: Initializing.
Jul 10 00:34:48.874815 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:34:48.874823 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:34:48.874830 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 10 00:34:48.874837 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 10 00:34:48.874851 kernel: ... version: 0
Jul 10 00:34:48.874858 kernel: ... bit width: 48
Jul 10 00:34:48.874866 kernel: ... generic registers: 6
Jul 10 00:34:48.874873 kernel: ... value mask: 0000ffffffffffff
Jul 10 00:34:48.874880 kernel: ... max period: 00007fffffffffff
Jul 10 00:34:48.874889 kernel: ... fixed-purpose events: 0
Jul 10 00:34:48.874896 kernel: ... event mask: 000000000000003f
Jul 10 00:34:48.874903 kernel: signal: max sigframe size: 1776
Jul 10 00:34:48.874910 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:34:48.874917 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:34:48.874924 kernel: x86: Booting SMP configuration:
Jul 10 00:34:48.874931 kernel: .... node #0, CPUs: #1
Jul 10 00:34:48.874938 kernel: kvm-clock: cpu 1, msr 2519a041, secondary cpu clock
Jul 10 00:34:48.874945 kernel: kvm-guest: setup async PF for cpu 1
Jul 10 00:34:48.874952 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Jul 10 00:34:48.874961 kernel: #2
Jul 10 00:34:48.874968 kernel: kvm-clock: cpu 2, msr 2519a081, secondary cpu clock
Jul 10 00:34:48.874975 kernel: kvm-guest: setup async PF for cpu 2
Jul 10 00:34:48.874983 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Jul 10 00:34:48.874990 kernel: #3
Jul 10 00:34:48.874997 kernel: kvm-clock: cpu 3, msr 2519a0c1, secondary cpu clock
Jul 10 00:34:48.875004 kernel: kvm-guest: setup async PF for cpu 3
Jul 10 00:34:48.875011 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Jul 10 00:34:48.875018 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:34:48.875026 kernel: smpboot: Max logical packages: 1
Jul 10 00:34:48.875033 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 10 00:34:48.875040 kernel: devtmpfs: initialized
Jul 10 00:34:48.875047 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:34:48.875054 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:34:48.875061 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:34:48.875068 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:34:48.875076 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:34:48.875083 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:34:48.875091 kernel: audit: type=2000 audit(1752107688.062:1): state=initialized audit_enabled=0 res=1
Jul 10 00:34:48.875098 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:34:48.875105 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:34:48.875112 kernel: cpuidle: using governor menu
Jul 10 00:34:48.875119 kernel: ACPI: bus type PCI registered
Jul 10 00:34:48.875126 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:34:48.875133 kernel: dca service started, version 1.12.1
Jul 10 00:34:48.875140 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 10 00:34:48.875148 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 10 00:34:48.875156 kernel: PCI: Using configuration type 1 for base access
Jul 10 00:34:48.875163 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:34:48.875170 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:34:48.875177 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:34:48.875184 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:34:48.875191 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:34:48.875198 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:34:48.875205 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 10 00:34:48.875212 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 10 00:34:48.875221 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 10 00:34:48.875228 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:34:48.875235 kernel: ACPI: Interpreter enabled
Jul 10 00:34:48.875242 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 10 00:34:48.875249 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:34:48.875256 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:34:48.875263 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 10 00:34:48.875270 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:34:48.875383 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:34:48.875458 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 10 00:34:48.875538 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 10 00:34:48.875548 kernel: PCI host bridge to bus 0000:00
Jul 10 00:34:48.875621 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 10 00:34:48.875684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 10 00:34:48.875744 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 10 00:34:48.875823 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 10 00:34:48.875895 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 10 00:34:48.875983 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 10 00:34:48.876057 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:34:48.876136 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 10 00:34:48.876216 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 10 00:34:48.876312 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 10 00:34:48.876390 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 10 00:34:48.876461 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 10 00:34:48.876547 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 10 00:34:48.876625 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 10 00:34:48.876696 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 10 00:34:48.876825 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 10 00:34:48.876909 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 10 00:34:48.876992 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 10 00:34:48.877064 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 10 00:34:48.877134 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 10 00:34:48.877202 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 10 00:34:48.877278 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 10 00:34:48.877348 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 10 00:34:48.881868 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 10 00:34:48.882017 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 10 00:34:48.882115 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 10 00:34:48.882223 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 10 00:34:48.882320 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 10 00:34:48.882423 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 10 00:34:48.882523 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 10 00:34:48.882623 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 10 00:34:48.882727 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 10 00:34:48.882900 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 10 00:34:48.882916 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 10 00:34:48.882928 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 10 00:34:48.882939 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 10 00:34:48.882949 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 10 00:34:48.882959 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 10 00:34:48.882973 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 10 00:34:48.882983 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 10 00:34:48.882993 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 10 00:34:48.883004 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 10 00:34:48.883014 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 10 00:34:48.883024 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 10 00:34:48.883034 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 10 00:34:48.883044 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 10 00:34:48.883055 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 10 00:34:48.883067 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 10 00:34:48.883077 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 10 00:34:48.883088 kernel: iommu: Default domain type: Translated
Jul 10 00:34:48.883098 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:34:48.883200 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 10 00:34:48.883297 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 10 00:34:48.883394 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 10 00:34:48.883409 kernel: vgaarb: loaded
Jul 10 00:34:48.883422 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 10 00:34:48.883433 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 10 00:34:48.883443 kernel: PTP clock support registered
Jul 10 00:34:48.883453 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:34:48.883464 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 10 00:34:48.883474 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 10 00:34:48.883484 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 10 00:34:48.883494 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 10 00:34:48.883504 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 10 00:34:48.883516 kernel: clocksource: Switched to clocksource kvm-clock
Jul 10 00:34:48.883527 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:34:48.883537 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:34:48.883548 kernel: pnp: PnP ACPI init
Jul 10 00:34:48.883653 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 10 00:34:48.883670 kernel: pnp: PnP ACPI: found 6 devices
Jul 10 00:34:48.883681 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:34:48.883691 kernel: NET: Registered PF_INET protocol family
Jul 10 00:34:48.883702 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:34:48.883716 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:34:48.883726 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:34:48.883737 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:34:48.883747 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 10 00:34:48.883770 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:34:48.883781 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:34:48.883791 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:34:48.883802 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:34:48.883814 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:34:48.883920 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 10 00:34:48.884005 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 10 00:34:48.884091 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 10 00:34:48.884173 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 10 00:34:48.884257 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 10 00:34:48.884360 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 10 00:34:48.884376 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:34:48.884386 kernel: Initialise system trusted keyrings
Jul 10 00:34:48.884400 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:34:48.884411 kernel: Key type asymmetric registered
Jul 10 00:34:48.884421 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:34:48.884445 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 10 00:34:48.884455 kernel: io scheduler mq-deadline registered
Jul 10 00:34:48.884466 kernel: io scheduler kyber registered
Jul 10 00:34:48.884476 kernel: io scheduler bfq registered
Jul 10 00:34:48.884487 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:34:48.884498 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 10 00:34:48.884523 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 10 00:34:48.884534 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 10 00:34:48.884544 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:34:48.884555 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:34:48.884574 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 10 00:34:48.884589 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 10 00:34:48.884599 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 10 00:34:48.884730 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 10 00:34:48.884768 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 10 00:34:48.884892 kernel: rtc_cmos 00:04: registered as rtc0
Jul 10 00:34:48.885001 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T00:34:48 UTC (1752107688)
Jul 10 00:34:48.885109 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 10 00:34:48.885124 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:34:48.885136 kernel: Segment Routing with IPv6
Jul 10 00:34:48.885146 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:34:48.885165 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:34:48.885180 kernel: Key type dns_resolver registered
Jul 10 00:34:48.885194 kernel: IPI shorthand broadcast: enabled
Jul 10 00:34:48.885204 kernel: sched_clock: Marking stable (394217970, 98299699)->(539400489, -46882820)
Jul 10 00:34:48.885215 kernel: registered taskstats version 1
Jul 10 00:34:48.885225 kernel: Loading compiled-in X.509 certificates
Jul 10 00:34:48.885236 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: 6ebecdd7757c0df63fc51731f0b99957f4e4af16'
Jul 10 00:34:48.885246 kernel: Key type .fscrypt registered
Jul 10 00:34:48.885256 kernel: Key type fscrypt-provisioning registered
Jul 10 00:34:48.885267 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:34:48.885292 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:34:48.885303 kernel: ima: No architecture policies found
Jul 10 00:34:48.885313 kernel: clk: Disabling unused clocks
Jul 10 00:34:48.885324 kernel: Freeing unused kernel image (initmem) memory: 47472K
Jul 10 00:34:48.885334 kernel: Write protecting the kernel read-only data: 28672k
Jul 10 00:34:48.885345 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 10 00:34:48.885363 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Jul 10 00:34:48.885378 kernel: Run /init as init process
Jul 10 00:34:48.885388 kernel: with arguments:
Jul 10 00:34:48.885398 kernel: /init
Jul 10 00:34:48.885411 kernel: with environment:
Jul 10 00:34:48.885421 kernel: HOME=/
Jul 10 00:34:48.885431 kernel: TERM=linux
Jul 10 00:34:48.885441 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:34:48.885469 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 10 00:34:48.885483 systemd[1]: Detected virtualization kvm.
Jul 10 00:34:48.885495 systemd[1]: Detected architecture x86-64.
Jul 10 00:34:48.885507 systemd[1]: Running in initrd.
Jul 10 00:34:48.885532 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:34:48.885543 systemd[1]: Hostname set to .
Jul 10 00:34:48.885555 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:34:48.885578 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:34:48.885590 systemd[1]: Started systemd-ask-password-console.path.
Jul 10 00:34:48.885600 systemd[1]: Reached target cryptsetup.target.
Jul 10 00:34:48.885611 systemd[1]: Reached target paths.target.
Jul 10 00:34:48.885633 systemd[1]: Reached target slices.target.
Jul 10 00:34:48.885648 systemd[1]: Reached target swap.target.
Jul 10 00:34:48.885668 systemd[1]: Reached target timers.target.
Jul 10 00:34:48.885694 systemd[1]: Listening on iscsid.socket.
Jul 10 00:34:48.885706 systemd[1]: Listening on iscsiuio.socket.
Jul 10 00:34:48.885717 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 10 00:34:48.885742 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 10 00:34:48.885755 systemd[1]: Listening on systemd-journald.socket.
Jul 10 00:34:48.885776 systemd[1]: Listening on systemd-networkd.socket.
Jul 10 00:34:48.885788 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 10 00:34:48.885807 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 10 00:34:48.885823 systemd[1]: Reached target sockets.target.
Jul 10 00:34:48.885834 systemd[1]: Starting kmod-static-nodes.service...
Jul 10 00:34:48.885864 systemd[1]: Finished network-cleanup.service.
Jul 10 00:34:48.885880 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:34:48.885894 systemd[1]: Starting systemd-journald.service...
Jul 10 00:34:48.885913 systemd[1]: Starting systemd-modules-load.service...
Jul 10 00:34:48.885930 systemd[1]: Starting systemd-resolved.service...
Jul 10 00:34:48.885941 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 10 00:34:48.885953 systemd[1]: Finished kmod-static-nodes.service.
Jul 10 00:34:48.885964 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:34:48.885989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 10 00:34:48.886001 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 10 00:34:48.886016 systemd-journald[197]: Journal started
Jul 10 00:34:48.886090 systemd-journald[197]: Runtime Journal (/run/log/journal/7740f9d7ca804bdcb3125916ef1c437c) is 6.0M, max 48.5M, 42.5M free.
Jul 10 00:34:48.873176 systemd-modules-load[198]: Inserted module 'overlay'
Jul 10 00:34:48.910478 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:34:48.910497 systemd[1]: Started systemd-journald.service.
Jul 10 00:34:48.910508 kernel: audit: type=1130 audit(1752107688.908:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.894547 systemd-resolved[199]: Positive Trust Anchors:
Jul 10 00:34:48.894555 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:34:48.920683 kernel: Bridge firewalling registered
Jul 10 00:34:48.920700 kernel: audit: type=1130 audit(1752107688.914:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.920712 kernel: audit: type=1130 audit(1752107688.916:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.894592 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 10 00:34:48.897365 systemd-resolved[199]: Defaulting to hostname 'linux'.
Jul 10 00:34:48.934949 kernel: audit: type=1130 audit(1752107688.923:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.914486 systemd[1]: Started systemd-resolved.service.
Jul 10 00:34:48.916625 systemd-modules-load[198]: Inserted module 'br_netfilter'
Jul 10 00:34:48.916830 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 10 00:34:48.923553 systemd[1]: Reached target nss-lookup.target.
Jul 10 00:34:48.934537 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 10 00:34:48.949959 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 10 00:34:48.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.951398 systemd[1]: Starting dracut-cmdline.service...
Jul 10 00:34:48.955852 kernel: audit: type=1130 audit(1752107688.950:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.958505 dracut-cmdline[217]: dracut-dracut-053
Jul 10 00:34:48.960198 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d
Jul 10 00:34:48.965779 kernel: SCSI subsystem initialized
Jul 10 00:34:48.977186 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:34:48.977214 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:34:48.978437 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 10 00:34:48.981182 systemd-modules-load[198]: Inserted module 'dm_multipath'
Jul 10 00:34:48.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:48.981953 systemd[1]: Finished systemd-modules-load.service.
Jul 10 00:34:48.986898 kernel: audit: type=1130 audit(1752107688.983:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:48.984419 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:34:48.995697 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:34:49.000216 kernel: audit: type=1130 audit(1752107688.996:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:48.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.007780 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:34:49.023791 kernel: iscsi: registered transport (tcp) Jul 10 00:34:49.044997 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:34:49.045027 kernel: QLogic iSCSI HBA Driver Jul 10 00:34:49.073245 systemd[1]: Finished dracut-cmdline.service. Jul 10 00:34:49.078751 kernel: audit: type=1130 audit(1752107689.074:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.075099 systemd[1]: Starting dracut-pre-udev.service... 
Jul 10 00:34:49.119796 kernel: raid6: avx2x4 gen() 30735 MB/s Jul 10 00:34:49.136797 kernel: raid6: avx2x4 xor() 8176 MB/s Jul 10 00:34:49.153803 kernel: raid6: avx2x2 gen() 31371 MB/s Jul 10 00:34:49.170814 kernel: raid6: avx2x2 xor() 18552 MB/s Jul 10 00:34:49.187820 kernel: raid6: avx2x1 gen() 25768 MB/s Jul 10 00:34:49.204791 kernel: raid6: avx2x1 xor() 15262 MB/s Jul 10 00:34:49.221792 kernel: raid6: sse2x4 gen() 14666 MB/s Jul 10 00:34:49.238790 kernel: raid6: sse2x4 xor() 7445 MB/s Jul 10 00:34:49.255790 kernel: raid6: sse2x2 gen() 16223 MB/s Jul 10 00:34:49.272803 kernel: raid6: sse2x2 xor() 9718 MB/s Jul 10 00:34:49.289795 kernel: raid6: sse2x1 gen() 12145 MB/s Jul 10 00:34:49.307176 kernel: raid6: sse2x1 xor() 7723 MB/s Jul 10 00:34:49.307238 kernel: raid6: using algorithm avx2x2 gen() 31371 MB/s Jul 10 00:34:49.307253 kernel: raid6: .... xor() 18552 MB/s, rmw enabled Jul 10 00:34:49.307838 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:34:49.320795 kernel: xor: automatically using best checksumming function avx Jul 10 00:34:49.409787 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 10 00:34:49.416610 systemd[1]: Finished dracut-pre-udev.service. Jul 10 00:34:49.421005 kernel: audit: type=1130 audit(1752107689.415:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.420000 audit: BPF prog-id=7 op=LOAD Jul 10 00:34:49.420000 audit: BPF prog-id=8 op=LOAD Jul 10 00:34:49.421252 systemd[1]: Starting systemd-udevd.service... Jul 10 00:34:49.433360 systemd-udevd[401]: Using default interface naming scheme 'v252'. Jul 10 00:34:49.437158 systemd[1]: Started systemd-udevd.service. 
Jul 10 00:34:49.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.439661 systemd[1]: Starting dracut-pre-trigger.service... Jul 10 00:34:49.448488 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jul 10 00:34:49.469169 systemd[1]: Finished dracut-pre-trigger.service. Jul 10 00:34:49.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.470741 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:34:49.505887 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:34:49.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:49.543141 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:34:49.560340 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:34:49.560356 kernel: AVX2 version of gcm_enc/dec engaged. Jul 10 00:34:49.560365 kernel: AES CTR mode by8 optimization enabled Jul 10 00:34:49.560378 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:34:49.560387 kernel: GPT:9289727 != 19775487 Jul 10 00:34:49.560395 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:34:49.560404 kernel: GPT:9289727 != 19775487 Jul 10 00:34:49.560412 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:34:49.560420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:49.560777 kernel: libata version 3.00 loaded. 
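The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 19775487) are the classic signature of a disk image grown after it was partitioned: the backup GPT header still sits where the old, smaller image ended. A minimal sketch of the arithmetic, plus the usual repair command (the device name /dev/vda is taken from this log; the `sgdisk` invocation is the generic fix suggested by the kernel's "Use GNU Parted" hint, not something this boot actually runs):

```shell
# LBA where the primary header expects the backup header (old image end)
# versus the real block count reported by virtio-blk above (19775488
# 512-byte sectors, i.e. last LBA 19775487).
echo $(( 9289727 * 512 / 1024 / 1024 ))    # old image size in MiB
echo $(( 19775488 * 512 / 1024 / 1024 ))   # actual disk size in MiB

# Typical repair on a real system (requires root; relocates the backup
# GPT header/table to the true end of the disk):
#   sgdisk --move-second-header /dev/vda
```

The two sizes (~4.4 GiB vs ~9.4 GiB) match the "9.43 GiB" capacity printed by virtio-blk, confirming the image was simply resized rather than corrupted.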
Jul 10 00:34:49.572848 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 00:34:49.587001 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 00:34:49.587019 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 10 00:34:49.587109 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 00:34:49.587182 kernel: scsi host0: ahci Jul 10 00:34:49.587283 kernel: scsi host1: ahci Jul 10 00:34:49.587440 kernel: scsi host2: ahci Jul 10 00:34:49.587525 kernel: scsi host3: ahci Jul 10 00:34:49.587618 kernel: scsi host4: ahci Jul 10 00:34:49.587723 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (439) Jul 10 00:34:49.587736 kernel: scsi host5: ahci Jul 10 00:34:49.587872 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 10 00:34:49.587883 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 10 00:34:49.587892 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 10 00:34:49.587900 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 10 00:34:49.587912 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 10 00:34:49.587921 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 10 00:34:49.580882 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 10 00:34:49.619969 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 10 00:34:49.628952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 10 00:34:49.631790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 10 00:34:49.636896 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:34:49.637490 systemd[1]: Starting disk-uuid.service... Jul 10 00:34:49.646887 disk-uuid[526]: Primary Header is updated. Jul 10 00:34:49.646887 disk-uuid[526]: Secondary Entries is updated. 
Jul 10 00:34:49.646887 disk-uuid[526]: Secondary Header is updated. Jul 10 00:34:49.651332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:49.654789 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:49.656785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:49.900417 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:49.900496 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:49.900510 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:49.900522 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:49.901791 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 00:34:49.902790 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 00:34:49.904753 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 00:34:49.904776 kernel: ata3.00: applying bridge limits Jul 10 00:34:49.906426 kernel: ata3.00: configured for UDMA/100 Jul 10 00:34:49.906794 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 00:34:49.939804 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 00:34:49.957444 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:34:49.957462 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:34:50.654807 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:34:50.654882 disk-uuid[527]: The operation has completed successfully. Jul 10 00:34:50.677473 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:34:50.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.677547 systemd[1]: Finished disk-uuid.service. 
Jul 10 00:34:50.686095 systemd[1]: Starting verity-setup.service... Jul 10 00:34:50.698774 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 10 00:34:50.718081 systemd[1]: Found device dev-mapper-usr.device. Jul 10 00:34:50.719448 systemd[1]: Mounting sysusr-usr.mount... Jul 10 00:34:50.722174 systemd[1]: Finished verity-setup.service. Jul 10 00:34:50.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.779779 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 10 00:34:50.779771 systemd[1]: Mounted sysusr-usr.mount. Jul 10 00:34:50.781308 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 10 00:34:50.783178 systemd[1]: Starting ignition-setup.service... Jul 10 00:34:50.785259 systemd[1]: Starting parse-ip-for-networkd.service... Jul 10 00:34:50.793157 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:34:50.793186 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:34:50.793196 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:34:50.801176 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 10 00:34:50.809107 systemd[1]: Finished ignition-setup.service. Jul 10 00:34:50.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.811929 systemd[1]: Starting ignition-fetch-offline.service... 
Jul 10 00:34:50.849716 ignition[648]: Ignition 2.14.0 Jul 10 00:34:50.849727 ignition[648]: Stage: fetch-offline Jul 10 00:34:50.849815 ignition[648]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:50.849824 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:50.849912 ignition[648]: parsed url from cmdline: "" Jul 10 00:34:50.849916 ignition[648]: no config URL provided Jul 10 00:34:50.849920 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:34:50.854633 systemd[1]: Finished parse-ip-for-networkd.service. Jul 10 00:34:50.849927 ignition[648]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:34:50.849944 ignition[648]: op(1): [started] loading QEMU firmware config module Jul 10 00:34:50.849948 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:34:50.855818 ignition[648]: op(1): [finished] loading QEMU firmware config module Jul 10 00:34:50.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.862095 systemd[1]: Starting systemd-networkd.service... Jul 10 00:34:50.860000 audit: BPF prog-id=9 op=LOAD Jul 10 00:34:50.898476 ignition[648]: parsing config with SHA512: 7a66c2a4eed1ee6ddbaefa3c22928e272cedb85ee460f4cca0ee4c993ed789c063ff27d3e12269b0be26b351bd3c3de93f592e8060b9546b8a3d7f2887fac711 Jul 10 00:34:50.905903 unknown[648]: fetched base config from "system" Jul 10 00:34:50.905914 unknown[648]: fetched user config from "qemu" Jul 10 00:34:50.906391 ignition[648]: fetch-offline: fetch-offline passed Jul 10 00:34:50.906435 ignition[648]: Ignition finished successfully Jul 10 00:34:50.910304 systemd[1]: Finished ignition-fetch-offline.service. 
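Ignition logs the SHA-512 digest of the config blob it parsed (the 7a66… value above). The same 128-hex-digit format can be reproduced with coreutils; this sketch hashes a literal string, since the actual fetched config (the QEMU fw_cfg blob) isn't available here:

```shell
# sha512sum emits the same 128-hex-digit digest format Ignition logs.
# Hashing a stand-in string; on a live system the input would be the
# fetched Ignition config itself.
printf 'example-config' | sha512sum | cut -d' ' -f1
```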
Jul 10 00:34:50.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.916973 systemd-networkd[717]: lo: Link UP Jul 10 00:34:50.916987 systemd-networkd[717]: lo: Gained carrier Jul 10 00:34:50.917537 systemd-networkd[717]: Enumeration completed Jul 10 00:34:50.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.917800 systemd[1]: Started systemd-networkd.service. Jul 10 00:34:50.917847 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:34:50.920296 systemd-networkd[717]: eth0: Link UP Jul 10 00:34:50.920299 systemd-networkd[717]: eth0: Gained carrier Jul 10 00:34:50.920435 systemd[1]: Reached target network.target. Jul 10 00:34:50.921954 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:34:50.927609 systemd[1]: Starting ignition-kargs.service... Jul 10 00:34:50.929555 systemd[1]: Starting iscsiuio.service... Jul 10 00:34:50.933570 systemd[1]: Started iscsiuio.service. Jul 10 00:34:50.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.934909 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:34:50.936824 systemd[1]: Starting iscsid.service... 
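eth0 got its 10.0.0.23/16 lease via the stock catch-all unit /usr/lib/systemd/network/zz-default.network named above. That file's exact contents aren't shown in the log; a minimal DHCP catch-all of the same shape (an illustrative reconstruction, not the unit's verbatim text) looks like:

```ini
[Match]
# Match any link not claimed by a more specific .network file
Name=*

[Network]
DHCP=yes
```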
Jul 10 00:34:50.937165 ignition[719]: Ignition 2.14.0 Jul 10 00:34:50.937172 ignition[719]: Stage: kargs Jul 10 00:34:50.937263 ignition[719]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:50.937273 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:50.941531 iscsid[728]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:34:50.941531 iscsid[728]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 10 00:34:50.941531 iscsid[728]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 10 00:34:50.941531 iscsid[728]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 10 00:34:50.941531 iscsid[728]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:34:50.941531 iscsid[728]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 10 00:34:50.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.938293 ignition[719]: kargs: kargs passed Jul 10 00:34:50.941676 systemd[1]: Started iscsid.service. Jul 10 00:34:50.938325 ignition[719]: Ignition finished successfully Jul 10 00:34:50.943743 systemd[1]: Finished ignition-kargs.service. 
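iscsid's warning above is benign on this boot (no software iSCSI targets are involved), but the file it asks for is a single line. A minimal sketch, written to a scratch directory rather than the real /etc/iscsi, with a made-up IQN:

```shell
# Real location is /etc/iscsi/initiatorname.iscsi; mktemp keeps this
# sketch from touching system state. The IQN is an invented example.
dir=$(mktemp -d)
printf 'InitiatorName=iqn.2004-10.com.example:node1\n' > "$dir/initiatorname.iscsi"
# One well-formed InitiatorName= line is all iscsid needs:
grep -c '^InitiatorName=iqn\.' "$dir/initiatorname.iscsi"
```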
Jul 10 00:34:50.949855 systemd[1]: Starting dracut-initqueue.service... Jul 10 00:34:50.957608 ignition[730]: Ignition 2.14.0 Jul 10 00:34:50.951271 systemd[1]: Starting ignition-disks.service... Jul 10 00:34:50.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.957614 ignition[730]: Stage: disks Jul 10 00:34:50.959291 systemd[1]: Finished ignition-disks.service. Jul 10 00:34:50.957693 ignition[730]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:50.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.960464 systemd[1]: Reached target initrd-root-device.target. Jul 10 00:34:50.957703 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:50.961925 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:34:50.958529 ignition[730]: disks: disks passed Jul 10 00:34:50.962705 systemd[1]: Reached target local-fs.target. Jul 10 00:34:50.958563 ignition[730]: Ignition finished successfully Jul 10 00:34:50.963955 systemd[1]: Reached target sysinit.target. Jul 10 00:34:50.964676 systemd[1]: Reached target basic.target. Jul 10 00:34:50.965624 systemd[1]: Finished dracut-initqueue.service. Jul 10 00:34:50.966937 systemd[1]: Reached target remote-fs-pre.target. Jul 10 00:34:50.968450 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:34:50.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.969271 systemd[1]: Reached target remote-fs.target. Jul 10 00:34:50.971591 systemd[1]: Starting dracut-pre-mount.service... 
Jul 10 00:34:50.978203 systemd[1]: Finished dracut-pre-mount.service. Jul 10 00:34:50.979849 systemd[1]: Starting systemd-fsck-root.service... Jul 10 00:34:50.988304 systemd-resolved[199]: Detected conflict on linux IN A 10.0.0.23 Jul 10 00:34:50.988320 systemd-resolved[199]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Jul 10 00:34:50.991802 systemd-fsck[751]: ROOT: clean, 619/553520 files, 56023/553472 blocks Jul 10 00:34:50.996686 systemd[1]: Finished systemd-fsck-root.service. Jul 10 00:34:50.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:50.997286 systemd[1]: Mounting sysroot.mount... Jul 10 00:34:51.004416 systemd[1]: Mounted sysroot.mount. Jul 10 00:34:51.006542 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 10 00:34:51.005141 systemd[1]: Reached target initrd-root-fs.target. Jul 10 00:34:51.007539 systemd[1]: Mounting sysroot-usr.mount... Jul 10 00:34:51.008415 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 10 00:34:51.008441 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:34:51.008460 systemd[1]: Reached target ignition-diskful.target. Jul 10 00:34:51.010281 systemd[1]: Mounted sysroot-usr.mount. Jul 10 00:34:51.012110 systemd[1]: Starting initrd-setup-root.service... 
Jul 10 00:34:51.016503 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:34:51.019163 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:34:51.022415 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:34:51.025882 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:34:51.052432 systemd[1]: Finished initrd-setup-root.service. Jul 10 00:34:51.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:51.054837 systemd[1]: Starting ignition-mount.service... Jul 10 00:34:51.056745 systemd[1]: Starting sysroot-boot.service... Jul 10 00:34:51.059926 bash[802]: umount: /sysroot/usr/share/oem: not mounted. Jul 10 00:34:51.068121 ignition[803]: INFO : Ignition 2.14.0 Jul 10 00:34:51.068121 ignition[803]: INFO : Stage: mount Jul 10 00:34:51.069603 ignition[803]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:51.069603 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:51.069603 ignition[803]: INFO : mount: mount passed Jul 10 00:34:51.069603 ignition[803]: INFO : Ignition finished successfully Jul 10 00:34:51.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:51.069818 systemd[1]: Finished ignition-mount.service. Jul 10 00:34:51.076168 systemd[1]: Finished sysroot-boot.service. Jul 10 00:34:51.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:51.729270 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Jul 10 00:34:51.736791 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Jul 10 00:34:51.736817 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:34:51.738370 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:34:51.738390 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:34:51.742037 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 10 00:34:51.743370 systemd[1]: Starting ignition-files.service... Jul 10 00:34:51.756514 ignition[832]: INFO : Ignition 2.14.0 Jul 10 00:34:51.756514 ignition[832]: INFO : Stage: files Jul 10 00:34:51.758018 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:34:51.758018 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:34:51.761007 ignition[832]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:34:51.762224 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:34:51.762224 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:34:51.765054 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:34:51.765054 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:34:51.765054 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:34:51.764977 unknown[832]: wrote ssh authorized keys file for user: core Jul 10 00:34:51.769718 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 10 00:34:51.769718 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 10 00:34:51.769718 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 10 00:34:51.769718 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 10 00:34:51.811954 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:34:51.927607 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 10 00:34:51.929562 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:34:51.931153 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 10 00:34:52.148024 systemd-networkd[717]: eth0: Gained IPv6LL
Jul 10 00:34:52.446716 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 10 00:34:52.529931 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:34:52.532111 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 10 00:34:53.140946 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 10 00:34:53.497720 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:34:53.497720 ignition[832]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 10 00:34:53.501620 ignition[832]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 10 00:34:53.504155 ignition[832]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 10 00:34:53.504155 ignition[832]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 10 00:34:53.504155 ignition[832]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 10 00:34:53.509258 ignition[832]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:34:53.511323 ignition[832]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:34:53.511323 ignition[832]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 10 00:34:53.514625 ignition[832]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jul 10 00:34:53.516284 ignition[832]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:34:53.518856 ignition[832]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:34:53.518856 ignition[832]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jul 10 00:34:53.522391 ignition[832]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:34:53.523919 ignition[832]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:34:53.525400 ignition[832]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:34:53.525400 ignition[832]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:34:53.547019 ignition[832]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:34:53.548873 ignition[832]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:34:53.550640 ignition[832]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:34:53.552620 ignition[832]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:34:53.554480 ignition[832]: INFO : files: files passed
Jul 10 00:34:53.555310 ignition[832]: INFO : Ignition finished successfully
Jul 10 00:34:53.557558 systemd[1]: Finished ignition-files.service.
Jul 10 00:34:53.564125 kernel: kauditd_printk_skb: 23 callbacks suppressed
Jul 10 00:34:53.564168 kernel: audit: type=1130 audit(1752107693.557:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.559278 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 10 00:34:53.564098 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 10 00:34:53.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.569569 initrd-setup-root-after-ignition[856]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Jul 10 00:34:53.574900 kernel: audit: type=1130 audit(1752107693.568:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.574927 kernel: audit: type=1130 audit(1752107693.574:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.564613 systemd[1]: Starting ignition-quench.service...
Jul 10 00:34:53.583114 kernel: audit: type=1131 audit(1752107693.574:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.583286 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:34:53.566371 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 10 00:34:53.569618 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:34:53.569681 systemd[1]: Finished ignition-quench.service.
Jul 10 00:34:53.574963 systemd[1]: Reached target ignition-complete.target.
Jul 10 00:34:53.581870 systemd[1]: Starting initrd-parse-etc.service...
Jul 10 00:34:53.592526 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:34:53.592598 systemd[1]: Finished initrd-parse-etc.service.
Jul 10 00:34:53.601836 kernel: audit: type=1130 audit(1752107693.593:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.602640 kernel: audit: type=1131 audit(1752107693.593:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.594610 systemd[1]: Reached target initrd-fs.target.
Jul 10 00:34:53.601810 systemd[1]: Reached target initrd.target.
Jul 10 00:34:53.602666 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 10 00:34:53.603254 systemd[1]: Starting dracut-pre-pivot.service...
Jul 10 00:34:53.612261 systemd[1]: Finished dracut-pre-pivot.service.
Jul 10 00:34:53.617299 kernel: audit: type=1130 audit(1752107693.612:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.613594 systemd[1]: Starting initrd-cleanup.service...
Jul 10 00:34:53.621370 systemd[1]: Stopped target nss-lookup.target.
Jul 10 00:34:53.659796 kernel: audit: type=1131 audit(1752107693.621:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.659823 kernel: audit: type=1131 audit(1752107693.626:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.659836 kernel: audit: type=1131 audit(1752107693.629:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.621496 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 10 00:34:53.662652 ignition[873]: INFO : Ignition 2.14.0
Jul 10 00:34:53.662652 ignition[873]: INFO : Stage: umount
Jul 10 00:34:53.662652 ignition[873]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:34:53.662652 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:34:53.662652 ignition[873]: INFO : umount: umount passed
Jul 10 00:34:53.662652 ignition[873]: INFO : Ignition finished successfully
Jul 10 00:34:53.663000 audit: BPF prog-id=6 op=UNLOAD
Jul 10 00:34:53.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.621674 systemd[1]: Stopped target timers.target.
Jul 10 00:34:53.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.622009 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:34:53.622086 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 10 00:34:53.622231 systemd[1]: Stopped target initrd.target.
Jul 10 00:34:53.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.625368 systemd[1]: Stopped target basic.target.
Jul 10 00:34:53.625522 systemd[1]: Stopped target ignition-complete.target.
Jul 10 00:34:53.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.625681 systemd[1]: Stopped target ignition-diskful.target.
Jul 10 00:34:53.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.626012 systemd[1]: Stopped target initrd-root-device.target.
Jul 10 00:34:53.626174 systemd[1]: Stopped target remote-fs.target.
Jul 10 00:34:53.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.626333 systemd[1]: Stopped target remote-fs-pre.target.
Jul 10 00:34:53.626501 systemd[1]: Stopped target sysinit.target.
Jul 10 00:34:53.626659 systemd[1]: Stopped target local-fs.target.
Jul 10 00:34:53.626989 systemd[1]: Stopped target local-fs-pre.target.
Jul 10 00:34:53.627147 systemd[1]: Stopped target swap.target.
Jul 10 00:34:53.627285 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:34:53.627361 systemd[1]: Stopped dracut-pre-mount.service.
Jul 10 00:34:53.627520 systemd[1]: Stopped target cryptsetup.target.
Jul 10 00:34:53.630509 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:34:53.630586 systemd[1]: Stopped dracut-initqueue.service.
Jul 10 00:34:53.630709 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:34:53.630809 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 10 00:34:53.634154 systemd[1]: Stopped target paths.target.
Jul 10 00:34:53.634236 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:34:53.638796 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 10 00:34:53.638929 systemd[1]: Stopped target slices.target.
Jul 10 00:34:53.639087 systemd[1]: Stopped target sockets.target.
Jul 10 00:34:53.639250 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:34:53.639307 systemd[1]: Closed iscsid.socket.
Jul 10 00:34:53.639439 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:34:53.639517 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 10 00:34:53.639616 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:34:53.639688 systemd[1]: Stopped ignition-files.service.
Jul 10 00:34:53.640605 systemd[1]: Stopping ignition-mount.service...
Jul 10 00:34:53.640904 systemd[1]: Stopping iscsiuio.service...
Jul 10 00:34:53.641617 systemd[1]: Stopping sysroot-boot.service...
Jul 10 00:34:53.642101 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:34:53.642217 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 10 00:34:53.642384 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:34:53.642486 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 10 00:34:53.645605 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 10 00:34:53.645683 systemd[1]: Stopped iscsiuio.service.
Jul 10 00:34:53.646987 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:34:53.647053 systemd[1]: Finished initrd-cleanup.service.
Jul 10 00:34:53.647743 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:34:53.647777 systemd[1]: Closed iscsiuio.socket.
Jul 10 00:34:53.650042 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:34:53.650137 systemd[1]: Stopped ignition-mount.service.
Jul 10 00:34:53.650633 systemd[1]: Stopped target network.target.
Jul 10 00:34:53.650699 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:34:53.650743 systemd[1]: Stopped ignition-disks.service.
Jul 10 00:34:53.651057 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:34:53.651086 systemd[1]: Stopped ignition-kargs.service.
Jul 10 00:34:53.651209 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:34:53.651236 systemd[1]: Stopped ignition-setup.service.
Jul 10 00:34:53.651459 systemd[1]: Stopping systemd-networkd.service...
Jul 10 00:34:53.651565 systemd[1]: Stopping systemd-resolved.service...
Jul 10 00:34:53.657284 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:34:53.660064 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:34:53.660141 systemd[1]: Stopped systemd-resolved.service.
Jul 10 00:34:53.663905 systemd-networkd[717]: eth0: DHCPv6 lease lost
Jul 10 00:34:53.740000 audit: BPF prog-id=9 op=UNLOAD
Jul 10 00:34:53.664805 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:34:53.664878 systemd[1]: Stopped systemd-networkd.service.
Jul 10 00:34:53.668266 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:34:53.668293 systemd[1]: Closed systemd-networkd.socket.
Jul 10 00:34:53.670539 systemd[1]: Stopping network-cleanup.service...
Jul 10 00:34:53.672041 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:34:53.672081 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 10 00:34:53.673106 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:34:53.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.673140 systemd[1]: Stopped systemd-sysctl.service.
Jul 10 00:34:53.674861 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:34:53.674893 systemd[1]: Stopped systemd-modules-load.service.
Jul 10 00:34:53.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 10 00:34:53.676102 systemd[1]: Stopping systemd-udevd.service...
Jul 10 00:34:53.679170 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:34:53.682110 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:34:53.682193 systemd[1]: Stopped network-cleanup.service.
Jul 10 00:34:53.684543 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:34:53.684645 systemd[1]: Stopped systemd-udevd.service.
Jul 10 00:34:53.763000 audit: BPF prog-id=5 op=UNLOAD
Jul 10 00:34:53.763000 audit: BPF prog-id=4 op=UNLOAD
Jul 10 00:34:53.763000 audit: BPF prog-id=3 op=UNLOAD
Jul 10 00:34:53.686620 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:34:53.763000 audit: BPF prog-id=8 op=UNLOAD
Jul 10 00:34:53.763000 audit: BPF prog-id=7 op=UNLOAD
Jul 10 00:34:53.686651 systemd[1]: Closed systemd-udevd-control.socket.
Jul 10 00:34:53.686740 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:34:53.686779 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 10 00:34:53.686903 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:34:53.686931 systemd[1]: Stopped dracut-pre-udev.service.
Jul 10 00:34:53.687087 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:34:53.687112 systemd[1]: Stopped dracut-cmdline.service.
Jul 10 00:34:53.687241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:34:53.687265 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 10 00:34:53.687983 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 10 00:34:53.688201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:34:53.688238 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Jul 10 00:34:53.691875 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:34:53.691905 systemd[1]: Stopped kmod-static-nodes.service.
Jul 10 00:34:53.693437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:34:53.783102 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:34:53.783133 iscsid[728]: iscsid shutting down.
Jul 10 00:34:53.693467 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 10 00:34:53.695024 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 00:34:53.695325 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:34:53.695392 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 10 00:34:53.750294 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:34:53.750408 systemd[1]: Stopped sysroot-boot.service.
Jul 10 00:34:53.752311 systemd[1]: Reached target initrd-switch-root.target.
Jul 10 00:34:53.754083 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:34:53.754122 systemd[1]: Stopped initrd-setup-root.service.
Jul 10 00:34:53.757117 systemd[1]: Starting initrd-switch-root.service...
Jul 10 00:34:53.762711 systemd[1]: Switching root.
Jul 10 00:34:53.793302 systemd-journald[197]: Journal stopped
Jul 10 00:34:56.592049 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 10 00:34:56.592086 kernel: SELinux: Class anon_inode not defined in policy.
Jul 10 00:34:56.592103 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 10 00:34:56.592115 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:34:56.592125 kernel: SELinux: policy capability open_perms=1
Jul 10 00:34:56.592134 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:34:56.592146 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:34:56.592156 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:34:56.592165 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:34:56.592174 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:34:56.592184 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:34:56.592197 systemd[1]: Successfully loaded SELinux policy in 42.492ms.
Jul 10 00:34:56.592210 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.878ms.
Jul 10 00:34:56.592221 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 10 00:34:56.592232 systemd[1]: Detected virtualization kvm.
Jul 10 00:34:56.592242 systemd[1]: Detected architecture x86-64.
Jul 10 00:34:56.592252 systemd[1]: Detected first boot.
Jul 10 00:34:56.592262 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:34:56.592273 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 10 00:34:56.592284 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:34:56.592294 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 10 00:34:56.592307 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 10 00:34:56.592318 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:34:56.592329 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:34:56.592339 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Jul 10 00:34:56.592351 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 10 00:34:56.592361 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 10 00:34:56.592372 systemd[1]: Created slice system-getty.slice.
Jul 10 00:34:56.592383 systemd[1]: Created slice system-modprobe.slice.
Jul 10 00:34:56.592393 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 10 00:34:56.592403 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 10 00:34:56.592413 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 10 00:34:56.592423 systemd[1]: Created slice user.slice.
Jul 10 00:34:56.592433 systemd[1]: Started systemd-ask-password-console.path.
Jul 10 00:34:56.592443 systemd[1]: Started systemd-ask-password-wall.path.
Jul 10 00:34:56.592453 systemd[1]: Set up automount boot.automount.
Jul 10 00:34:56.592464 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 10 00:34:56.592474 systemd[1]: Reached target integritysetup.target.
Jul 10 00:34:56.592486 systemd[1]: Reached target remote-cryptsetup.target.
Jul 10 00:34:56.592496 systemd[1]: Reached target remote-fs.target.
Jul 10 00:34:56.592507 systemd[1]: Reached target slices.target.
Jul 10 00:34:56.592517 systemd[1]: Reached target swap.target.
Jul 10 00:34:56.592527 systemd[1]: Reached target torcx.target.
Jul 10 00:34:56.592537 systemd[1]: Reached target veritysetup.target.
Jul 10 00:34:56.592549 systemd[1]: Listening on systemd-coredump.socket.
Jul 10 00:34:56.592558 systemd[1]: Listening on systemd-initctl.socket.
Jul 10 00:34:56.592569 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 10 00:34:56.592578 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 10 00:34:56.592589 systemd[1]: Listening on systemd-journald.socket.
Jul 10 00:34:56.592600 systemd[1]: Listening on systemd-networkd.socket.
Jul 10 00:34:56.592610 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 10 00:34:56.592620 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 10 00:34:56.592630 systemd[1]: Listening on systemd-userdbd.socket.
Jul 10 00:34:56.592640 systemd[1]: Mounting dev-hugepages.mount...
Jul 10 00:34:56.592660 systemd[1]: Mounting dev-mqueue.mount...
Jul 10 00:34:56.592671 systemd[1]: Mounting media.mount...
Jul 10 00:34:56.592681 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:34:56.592691 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 10 00:34:56.592702 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 10 00:34:56.592712 systemd[1]: Mounting tmp.mount...
Jul 10 00:34:56.592722 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 10 00:34:56.592732 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 10 00:34:56.592742 systemd[1]: Starting kmod-static-nodes.service...
Jul 10 00:34:56.592791 systemd[1]: Starting modprobe@configfs.service...
Jul 10 00:34:56.592802 systemd[1]: Starting modprobe@dm_mod.service...
Jul 10 00:34:56.592812 systemd[1]: Starting modprobe@drm.service...
Jul 10 00:34:56.592823 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 10 00:34:56.592833 systemd[1]: Starting modprobe@fuse.service...
Jul 10 00:34:56.592843 systemd[1]: Starting modprobe@loop.service...
Jul 10 00:34:56.592854 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:34:56.592865 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 10 00:34:56.592876 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Jul 10 00:34:56.592887 systemd[1]: Starting systemd-journald.service...
Jul 10 00:34:56.592898 kernel: loop: module loaded
Jul 10 00:34:56.592908 kernel: fuse: init (API version 7.34)
Jul 10 00:34:56.592918 systemd[1]: Starting systemd-modules-load.service...
Jul 10 00:34:56.592927 systemd[1]: Starting systemd-network-generator.service...
Jul 10 00:34:56.592938 systemd[1]: Starting systemd-remount-fs.service...
Jul 10 00:34:56.592948 systemd[1]: Starting systemd-udev-trigger.service...
Jul 10 00:34:56.592958 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:34:56.592969 systemd[1]: Mounted dev-hugepages.mount.
Jul 10 00:34:56.592980 systemd[1]: Mounted dev-mqueue.mount.
Jul 10 00:34:56.592990 systemd[1]: Mounted media.mount.
Jul 10 00:34:56.593001 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 10 00:34:56.593011 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 10 00:34:56.593020 systemd[1]: Mounted tmp.mount.
Jul 10 00:34:56.593030 systemd[1]: Finished kmod-static-nodes.service.
Jul 10 00:34:56.593043 systemd-journald[1016]: Journal started
Jul 10 00:34:56.593079 systemd-journald[1016]: Runtime Journal (/run/log/journal/7740f9d7ca804bdcb3125916ef1c437c) is 6.0M, max 48.5M, 42.5M free.
Jul 10 00:34:56.502000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:34:56.502000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 10 00:34:56.590000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:34:56.590000 audit[1016]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff98ba4ea0 a2=4000 a3=7fff98ba4f3c items=0 ppid=1 pid=1016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:56.590000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:34:56.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.595808 systemd[1]: Started systemd-journald.service. Jul 10 00:34:56.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.596974 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 00:34:56.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.598110 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 10 00:34:56.598344 systemd[1]: Finished modprobe@configfs.service. Jul 10 00:34:56.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.599436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:56.599663 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:56.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.600710 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:34:56.601110 systemd[1]: Finished modprobe@drm.service. Jul 10 00:34:56.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.602249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:56.602485 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 10 00:34:56.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.603567 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:34:56.603814 systemd[1]: Finished modprobe@fuse.service. Jul 10 00:34:56.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.604844 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:56.605030 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:56.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.606239 systemd[1]: Finished systemd-modules-load.service. 
Jul 10 00:34:56.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.607513 systemd[1]: Finished systemd-network-generator.service. Jul 10 00:34:56.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.608870 systemd[1]: Finished systemd-remount-fs.service. Jul 10 00:34:56.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.610044 systemd[1]: Reached target network-pre.target. Jul 10 00:34:56.611948 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 00:34:56.614001 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 00:34:56.614944 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:34:56.617178 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 00:34:56.620066 systemd[1]: Starting systemd-journal-flush.service... Jul 10 00:34:56.621411 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:56.622417 systemd[1]: Starting systemd-random-seed.service... Jul 10 00:34:56.623648 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:56.624934 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:34:56.626023 systemd-journald[1016]: Time spent on flushing to /var/log/journal/7740f9d7ca804bdcb3125916ef1c437c is 12.568ms for 1041 entries. 
Jul 10 00:34:56.626023 systemd-journald[1016]: System Journal (/var/log/journal/7740f9d7ca804bdcb3125916ef1c437c) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:34:56.727103 systemd-journald[1016]: Received client request to flush runtime journal. Jul 10 00:34:56.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:56.627967 systemd[1]: Starting systemd-sysusers.service... Jul 10 00:34:56.630659 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 00:34:56.631742 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 00:34:56.727816 udevadm[1058]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 00:34:56.632923 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:34:56.634886 systemd[1]: Starting systemd-udev-settle.service... 
Jul 10 00:34:56.676006 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:34:56.677144 systemd[1]: Finished systemd-sysusers.service. Jul 10 00:34:56.679046 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:34:56.693582 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:34:56.706700 systemd[1]: Finished systemd-random-seed.service. Jul 10 00:34:56.707951 systemd[1]: Reached target first-boot-complete.target. Jul 10 00:34:56.728204 systemd[1]: Finished systemd-journal-flush.service. Jul 10 00:34:56.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.106701 systemd[1]: Finished systemd-hwdb-update.service. Jul 10 00:34:57.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.109192 systemd[1]: Starting systemd-udevd.service... Jul 10 00:34:57.125510 systemd-udevd[1070]: Using default interface naming scheme 'v252'. Jul 10 00:34:57.138154 systemd[1]: Started systemd-udevd.service. Jul 10 00:34:57.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.143295 systemd[1]: Starting systemd-networkd.service... Jul 10 00:34:57.147527 systemd[1]: Starting systemd-userdbd.service... Jul 10 00:34:57.171468 systemd[1]: Found device dev-ttyS0.device. Jul 10 00:34:57.188222 systemd[1]: Started systemd-userdbd.service. 
Jul 10 00:34:57.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.192950 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:34:57.209813 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 10 00:34:57.217786 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:34:57.227068 systemd-networkd[1081]: lo: Link UP Jul 10 00:34:57.227342 systemd-networkd[1081]: lo: Gained carrier Jul 10 00:34:57.227750 systemd-networkd[1081]: Enumeration completed Jul 10 00:34:57.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.228002 systemd[1]: Started systemd-networkd.service. Jul 10 00:34:57.228009 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 10 00:34:57.229238 systemd-networkd[1081]: eth0: Link UP Jul 10 00:34:57.229242 systemd-networkd[1081]: eth0: Gained carrier Jul 10 00:34:57.229000 audit[1072]: AVC avc: denied { confidentiality } for pid=1072 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 10 00:34:57.229000 audit[1072]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555c575c48a0 a1=338ac a2=7fd114466bc5 a3=5 items=110 ppid=1070 pid=1072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:57.229000 audit: CWD cwd="/" Jul 10 00:34:57.229000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=1 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=2 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=3 name=(null) inode=13850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=4 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=5 name=(null) inode=13851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=6 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=7 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=8 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=9 name=(null) inode=13853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=10 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=11 name=(null) inode=13854 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=12 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=13 name=(null) inode=13855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=14 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=15 name=(null) inode=13856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=16 name=(null) inode=13852 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=17 name=(null) inode=13857 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=18 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=19 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=20 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=21 name=(null) inode=13859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=22 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=23 name=(null) inode=13860 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 
00:34:57.229000 audit: PATH item=24 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=25 name=(null) inode=13861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=26 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=27 name=(null) inode=13862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=28 name=(null) inode=13858 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=29 name=(null) inode=13863 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=30 name=(null) inode=13849 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=31 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=32 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=33 
name=(null) inode=13865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=34 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=35 name=(null) inode=13866 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=36 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=37 name=(null) inode=13867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=38 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=39 name=(null) inode=13868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=40 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=41 name=(null) inode=13869 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=42 name=(null) inode=13849 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=43 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=44 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=45 name=(null) inode=13871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=46 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=47 name=(null) inode=13872 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=48 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=49 name=(null) inode=13873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=50 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=51 name=(null) inode=13874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=52 name=(null) inode=13870 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=53 name=(null) inode=13875 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=55 name=(null) inode=13876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=56 name=(null) inode=13876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=57 name=(null) inode=13877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=58 name=(null) inode=13876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=59 name=(null) inode=13878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=60 name=(null) inode=13876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=61 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=62 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=63 name=(null) inode=13880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=64 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=65 name=(null) inode=13881 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=66 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=67 name=(null) inode=13882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=68 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=69 name=(null) inode=13883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=70 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=71 name=(null) inode=13884 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=72 name=(null) inode=13876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=73 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=74 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=75 name=(null) inode=13886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=76 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=77 name=(null) inode=13887 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=78 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 
00:34:57.229000 audit: PATH item=79 name=(null) inode=13888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=80 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=81 name=(null) inode=13889 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=82 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=83 name=(null) inode=13890 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=84 name=(null) inode=13876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=85 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=86 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=87 name=(null) inode=13892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=88 
name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=89 name=(null) inode=13893 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=90 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=91 name=(null) inode=13894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=92 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=93 name=(null) inode=13895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=94 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=95 name=(null) inode=13896 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=96 name=(null) inode=13876 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=97 name=(null) inode=13897 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=98 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=99 name=(null) inode=13898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=100 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=101 name=(null) inode=13899 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=102 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=103 name=(null) inode=13900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=104 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=105 name=(null) inode=13901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=106 name=(null) inode=13897 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=107 name=(null) inode=13902 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PATH item=109 name=(null) inode=15801 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:34:57.229000 audit: PROCTITLE proctitle="(udev-worker)" Jul 10 00:34:57.244818 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 10 00:34:57.253880 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 10 00:34:57.254010 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 00:34:57.244873 systemd-networkd[1081]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:34:57.258782 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 10 00:34:57.267785 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:34:57.305802 kernel: kvm: Nested Virtualization enabled Jul 10 00:34:57.305998 kernel: SVM: kvm: Nested Paging enabled Jul 10 00:34:57.306035 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 10 00:34:57.306070 kernel: SVM: Virtual GIF supported Jul 10 00:34:57.321777 kernel: EDAC MC: Ver: 3.0.0 Jul 10 00:34:57.358186 systemd[1]: Finished systemd-udev-settle.service. 
Jul 10 00:34:57.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.360230 systemd[1]: Starting lvm2-activation-early.service... Jul 10 00:34:57.367012 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:34:57.391840 systemd[1]: Finished lvm2-activation-early.service. Jul 10 00:34:57.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.392843 systemd[1]: Reached target cryptsetup.target. Jul 10 00:34:57.394625 systemd[1]: Starting lvm2-activation.service... Jul 10 00:34:57.397681 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:34:57.422683 systemd[1]: Finished lvm2-activation.service. Jul 10 00:34:57.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.423666 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:34:57.424538 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:34:57.424560 systemd[1]: Reached target local-fs.target. Jul 10 00:34:57.425384 systemd[1]: Reached target machines.target. Jul 10 00:34:57.427210 systemd[1]: Starting ldconfig.service... Jul 10 00:34:57.428316 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 10 00:34:57.428360 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:57.429223 systemd[1]: Starting systemd-boot-update.service... Jul 10 00:34:57.431274 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 00:34:57.433422 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 00:34:57.435622 systemd[1]: Starting systemd-sysext.service... Jul 10 00:34:57.436732 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) Jul 10 00:34:57.437735 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 00:34:57.439071 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 10 00:34:57.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.444225 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 00:34:57.448100 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 00:34:57.448393 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 00:34:57.457810 kernel: loop0: detected capacity change from 0 to 221472 Jul 10 00:34:57.476976 systemd-fsck[1122]: fsck.fat 4.2 (2021-01-31) Jul 10 00:34:57.476976 systemd-fsck[1122]: /dev/vda1: 790 files, 120731/258078 clusters Jul 10 00:34:57.478392 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 10 00:34:57.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.481205 systemd[1]: Mounting boot.mount... 
Jul 10 00:34:57.490341 systemd[1]: Mounted boot.mount. Jul 10 00:34:57.717781 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:34:57.718227 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:34:57.718853 systemd[1]: Finished systemd-boot-update.service. Jul 10 00:34:57.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.720218 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 00:34:57.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.734778 kernel: loop1: detected capacity change from 0 to 221472 Jul 10 00:34:57.740077 (sd-sysext)[1131]: Using extensions 'kubernetes'. Jul 10 00:34:57.740405 (sd-sysext)[1131]: Merged extensions into '/usr'. Jul 10 00:34:57.756153 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:57.757645 systemd[1]: Mounting usr-share-oem.mount... Jul 10 00:34:57.758685 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:57.759664 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:57.761704 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:57.763641 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:57.764712 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:57.764925 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 10 00:34:57.765121 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:57.768179 systemd[1]: Mounted usr-share-oem.mount. Jul 10 00:34:57.768819 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:34:57.769455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:57.769596 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:57.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.770930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:57.771046 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:57.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.772293 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:57.772419 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:57.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:34:57.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.773858 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:57.774020 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:57.774834 systemd[1]: Finished systemd-sysext.service. Jul 10 00:34:57.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.776025 systemd[1]: Finished ldconfig.service. Jul 10 00:34:57.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.777889 systemd[1]: Starting ensure-sysext.service... Jul 10 00:34:57.779694 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 00:34:57.784070 systemd[1]: Reloading. Jul 10 00:34:57.790462 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 00:34:57.791709 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:34:57.793476 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 10 00:34:57.829577 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-07-10T00:34:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:34:57.829927 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-07-10T00:34:57Z" level=info msg="torcx already run" Jul 10 00:34:57.903343 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:34:57.903362 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:34:57.922541 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:34:57.974588 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 10 00:34:57.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.978402 systemd[1]: Starting audit-rules.service... Jul 10 00:34:57.980328 systemd[1]: Starting clean-ca-certificates.service... Jul 10 00:34:57.982154 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 10 00:34:57.984545 systemd[1]: Starting systemd-resolved.service... Jul 10 00:34:57.986465 systemd[1]: Starting systemd-timesyncd.service... Jul 10 00:34:57.988658 systemd[1]: Starting systemd-update-utmp.service... 
Jul 10 00:34:57.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.991133 systemd[1]: Finished clean-ca-certificates.service. Jul 10 00:34:57.993000 audit[1226]: SYSTEM_BOOT pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 10 00:34:57.998565 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:57.998780 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.000333 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:58.003022 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:58.004739 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:58.005497 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.005606 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:58.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:58.005726 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:58.005815 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 10 00:34:58.006752 systemd[1]: Finished systemd-update-utmp.service. Jul 10 00:34:58.007988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:58.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:58.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:58.008115 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:58.009269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:58.009381 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:58.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:58.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:58.010668 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 10 00:34:58.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:58.011938 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:58.012057 systemd[1]: Finished modprobe@loop.service. 
Jul 10 00:34:58.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:58.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:34:58.014868 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:58.015036 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.016304 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:58.017974 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:58.020861 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:58.021574 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.021695 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:58.023194 augenrules[1249]: No rules Jul 10 00:34:58.022860 systemd[1]: Starting systemd-update-done.service... 
Jul 10 00:34:58.022000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 10 00:34:58.022000 audit[1249]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff4a8c6760 a2=420 a3=0 items=0 ppid=1214 pid=1249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:34:58.022000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 10 00:34:58.023688 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:58.023794 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:58.024849 systemd[1]: Finished audit-rules.service. Jul 10 00:34:58.025894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:58.026018 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:58.027159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:58.027278 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:58.028473 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:58.028589 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:58.029611 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:34:58.029700 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.030269 systemd[1]: Finished systemd-update-done.service. Jul 10 00:34:58.033296 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 10 00:34:58.033488 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.034454 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:34:58.036036 systemd[1]: Starting modprobe@drm.service... Jul 10 00:34:58.037991 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:34:58.039746 systemd[1]: Starting modprobe@loop.service... Jul 10 00:34:58.040554 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.040701 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:58.042132 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 10 00:34:58.043071 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:34:58.043165 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:34:58.044544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:34:58.044717 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:34:58.045961 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:34:58.046101 systemd[1]: Finished modprobe@drm.service. Jul 10 00:34:58.047174 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:34:58.047305 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:34:58.048473 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:34:58.048733 systemd[1]: Finished modprobe@loop.service. Jul 10 00:34:58.049755 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 10 00:34:58.050062 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.051611 systemd[1]: Finished ensure-sysext.service. Jul 10 00:34:58.079791 systemd-resolved[1219]: Positive Trust Anchors: Jul 10 00:34:58.079805 systemd-resolved[1219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:34:58.079832 systemd-resolved[1219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:34:58.085435 systemd[1]: Started systemd-timesyncd.service. Jul 10 00:34:58.086742 systemd[1]: Reached target time-set.target. Jul 10 00:34:58.086911 systemd-timesyncd[1223]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:34:58.086955 systemd-timesyncd[1223]: Initial clock synchronization to Thu 2025-07-10 00:34:58.091643 UTC. Jul 10 00:34:58.086993 systemd-resolved[1219]: Defaulting to hostname 'linux'. Jul 10 00:34:58.088466 systemd[1]: Started systemd-resolved.service. Jul 10 00:34:58.089323 systemd[1]: Reached target network.target. Jul 10 00:34:58.090097 systemd[1]: Reached target nss-lookup.target. Jul 10 00:34:58.090913 systemd[1]: Reached target sysinit.target. Jul 10 00:34:58.091772 systemd[1]: Started motdgen.path. Jul 10 00:34:58.092484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 10 00:34:58.093677 systemd[1]: Started logrotate.timer. Jul 10 00:34:58.094437 systemd[1]: Started mdadm.timer. Jul 10 00:34:58.095095 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Jul 10 00:34:58.095910 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:34:58.095976 systemd[1]: Reached target paths.target. Jul 10 00:34:58.096693 systemd[1]: Reached target timers.target. Jul 10 00:34:58.097691 systemd[1]: Listening on dbus.socket. Jul 10 00:34:58.099435 systemd[1]: Starting docker.socket... Jul 10 00:34:58.101227 systemd[1]: Listening on sshd.socket. Jul 10 00:34:58.102057 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:58.102340 systemd[1]: Listening on docker.socket. Jul 10 00:34:58.103113 systemd[1]: Reached target sockets.target. Jul 10 00:34:58.103896 systemd[1]: Reached target basic.target. Jul 10 00:34:58.104742 systemd[1]: System is tainted: cgroupsv1 Jul 10 00:34:58.104820 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.104846 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:34:58.105791 systemd[1]: Starting containerd.service... Jul 10 00:34:58.107720 systemd[1]: Starting dbus.service... Jul 10 00:34:58.109374 systemd[1]: Starting enable-oem-cloudinit.service... Jul 10 00:34:58.111445 systemd[1]: Starting extend-filesystems.service... Jul 10 00:34:58.112401 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 10 00:34:58.114153 jq[1276]: false Jul 10 00:34:58.113415 systemd[1]: Starting motdgen.service... Jul 10 00:34:58.115136 systemd[1]: Starting prepare-helm.service... Jul 10 00:34:58.117478 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 10 00:34:58.119182 systemd[1]: Starting sshd-keygen.service... 
Jul 10 00:34:58.121684 systemd[1]: Starting systemd-logind.service... Jul 10 00:34:58.124037 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:34:58.124085 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:34:58.125052 systemd[1]: Starting update-engine.service... Jul 10 00:34:58.126940 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 10 00:34:58.129327 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:34:58.132194 jq[1291]: true Jul 10 00:34:58.129553 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 10 00:34:58.130424 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:34:58.130650 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 10 00:34:58.135976 dbus-daemon[1275]: [system] SELinux support is enabled Jul 10 00:34:58.137434 systemd[1]: Started dbus.service. Jul 10 00:34:58.139961 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:34:58.139983 systemd[1]: Reached target system-config.target. Jul 10 00:34:58.141024 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:34:58.141043 systemd[1]: Reached target user-config.target. 
Jul 10 00:34:58.143608 jq[1298]: true Jul 10 00:34:58.143901 tar[1294]: linux-amd64/helm Jul 10 00:34:58.152066 extend-filesystems[1277]: Found loop1 Jul 10 00:34:58.153132 extend-filesystems[1277]: Found sr0 Jul 10 00:34:58.153132 extend-filesystems[1277]: Found vda Jul 10 00:34:58.153132 extend-filesystems[1277]: Found vda1 Jul 10 00:34:58.153132 extend-filesystems[1277]: Found vda2 Jul 10 00:34:58.153132 extend-filesystems[1277]: Found vda3 Jul 10 00:34:58.153132 extend-filesystems[1277]: Found usr Jul 10 00:34:58.153132 extend-filesystems[1277]: Found vda4 Jul 10 00:34:58.153132 extend-filesystems[1277]: Found vda6 Jul 10 00:34:58.153132 extend-filesystems[1277]: Found vda7 Jul 10 00:34:58.153132 extend-filesystems[1277]: Found vda9 Jul 10 00:34:58.153132 extend-filesystems[1277]: Checking size of /dev/vda9 Jul 10 00:34:58.154527 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:34:58.154785 systemd[1]: Finished motdgen.service. Jul 10 00:34:58.176058 extend-filesystems[1277]: Resized partition /dev/vda9 Jul 10 00:34:58.185238 update_engine[1288]: I0710 00:34:58.185097 1288 main.cc:92] Flatcar Update Engine starting Jul 10 00:34:58.186712 extend-filesystems[1326]: resize2fs 1.46.5 (30-Dec-2021) Jul 10 00:34:58.186951 systemd[1]: Started update-engine.service. Jul 10 00:34:58.187084 update_engine[1288]: I0710 00:34:58.186971 1288 update_check_scheduler.cc:74] Next update check in 8m51s Jul 10 00:34:58.190044 systemd[1]: Started locksmithd.service. 
Jul 10 00:34:58.192781 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:34:58.194024 env[1297]: time="2025-07-10T00:34:58.193987988Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 10 00:34:58.216260 systemd-logind[1285]: Watching system buttons on /dev/input/event1 (Power Button) Jul 10 00:34:58.216285 systemd-logind[1285]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:34:58.216469 systemd-logind[1285]: New seat seat0. Jul 10 00:34:58.218999 systemd[1]: Started systemd-logind.service. Jul 10 00:34:58.223603 env[1297]: time="2025-07-10T00:34:58.223542483Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:34:58.250303 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:34:58.250345 env[1297]: time="2025-07-10T00:34:58.223770851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:58.250345 env[1297]: time="2025-07-10T00:34:58.226271912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:34:58.250345 env[1297]: time="2025-07-10T00:34:58.226295105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:58.250345 env[1297]: time="2025-07-10T00:34:58.226551336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:34:58.250345 env[1297]: time="2025-07-10T00:34:58.226574098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:58.250345 env[1297]: time="2025-07-10T00:34:58.226589437Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 10 00:34:58.250345 env[1297]: time="2025-07-10T00:34:58.226600849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:58.229739 locksmithd[1334]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:34:58.251067 extend-filesystems[1326]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:34:58.251067 extend-filesystems[1326]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:34:58.251067 extend-filesystems[1326]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:34:58.255697 env[1297]: time="2025-07-10T00:34:58.250401403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:58.255697 env[1297]: time="2025-07-10T00:34:58.251964795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:34:58.255697 env[1297]: time="2025-07-10T00:34:58.252196219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:34:58.255697 env[1297]: time="2025-07-10T00:34:58.252212079Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 10 00:34:58.255697 env[1297]: time="2025-07-10T00:34:58.252272001Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 10 00:34:58.255697 env[1297]: time="2025-07-10T00:34:58.252281459Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:34:58.251456 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:34:58.255951 extend-filesystems[1277]: Resized filesystem in /dev/vda9 Jul 10 00:34:58.251718 systemd[1]: Finished extend-filesystems.service. Jul 10 00:34:58.261506 bash[1333]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:34:58.261445 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263531928Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263558878Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263570530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263598242Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263622197Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263635692Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263646723Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263658385Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263670167Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263684494Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263695985Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263706715Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263801914Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:34:58.264100 env[1297]: time="2025-07-10T00:34:58.263867637Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:34:58.264554 env[1297]: time="2025-07-10T00:34:58.264536101Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:34:58.264651 env[1297]: time="2025-07-10T00:34:58.264632762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.264735 env[1297]: time="2025-07-10T00:34:58.264712812Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:34:58.264946 env[1297]: time="2025-07-10T00:34:58.264927405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 10 00:34:58.265022 env[1297]: time="2025-07-10T00:34:58.265004669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.265098 env[1297]: time="2025-07-10T00:34:58.265080612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.265174 env[1297]: time="2025-07-10T00:34:58.265156504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.265251 env[1297]: time="2025-07-10T00:34:58.265232917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.265326 env[1297]: time="2025-07-10T00:34:58.265308780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.265405 env[1297]: time="2025-07-10T00:34:58.265386996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.265485 env[1297]: time="2025-07-10T00:34:58.265467127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.265600 env[1297]: time="2025-07-10T00:34:58.265582794Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:34:58.265786 env[1297]: time="2025-07-10T00:34:58.265769534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.265869 env[1297]: time="2025-07-10T00:34:58.265851698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.265944 env[1297]: time="2025-07-10T00:34:58.265926969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 10 00:34:58.266019 env[1297]: time="2025-07-10T00:34:58.266001479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:34:58.266100 env[1297]: time="2025-07-10T00:34:58.266078543Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 10 00:34:58.266171 env[1297]: time="2025-07-10T00:34:58.266152722Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:34:58.266257 env[1297]: time="2025-07-10T00:34:58.266239014Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 10 00:34:58.266354 env[1297]: time="2025-07-10T00:34:58.266336477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 10 00:34:58.266628 env[1297]: time="2025-07-10T00:34:58.266572099Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:34:58.267238 env[1297]: time="2025-07-10T00:34:58.266749562Z" level=info msg="Connect containerd service" Jul 10 00:34:58.267238 env[1297]: time="2025-07-10T00:34:58.267056006Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:34:58.267726 env[1297]: time="2025-07-10T00:34:58.267704883Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:34:58.268256 env[1297]: time="2025-07-10T00:34:58.267904868Z" level=info msg="Start subscribing containerd event" Jul 10 00:34:58.268256 env[1297]: time="2025-07-10T00:34:58.267975210Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 10 00:34:58.268256 env[1297]: time="2025-07-10T00:34:58.267980831Z" level=info msg="Start recovering state" Jul 10 00:34:58.268256 env[1297]: time="2025-07-10T00:34:58.268016408Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:34:58.268256 env[1297]: time="2025-07-10T00:34:58.268055561Z" level=info msg="Start event monitor" Jul 10 00:34:58.268256 env[1297]: time="2025-07-10T00:34:58.268080147Z" level=info msg="Start snapshots syncer" Jul 10 00:34:58.268256 env[1297]: time="2025-07-10T00:34:58.268090627Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:34:58.268256 env[1297]: time="2025-07-10T00:34:58.268097199Z" level=info msg="Start streaming server" Jul 10 00:34:58.268256 env[1297]: time="2025-07-10T00:34:58.268193560Z" level=info msg="containerd successfully booted in 0.074889s" Jul 10 00:34:58.268131 systemd[1]: Started containerd.service. Jul 10 00:34:58.560466 tar[1294]: linux-amd64/LICENSE Jul 10 00:34:58.560466 tar[1294]: linux-amd64/README.md Jul 10 00:34:58.564720 systemd[1]: Finished prepare-helm.service. Jul 10 00:34:58.636174 sshd_keygen[1302]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:34:58.653887 systemd[1]: Finished sshd-keygen.service. Jul 10 00:34:58.656718 systemd[1]: Starting issuegen.service... Jul 10 00:34:58.662203 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:34:58.662481 systemd[1]: Finished issuegen.service. Jul 10 00:34:58.664976 systemd[1]: Starting systemd-user-sessions.service... Jul 10 00:34:58.669768 systemd[1]: Finished systemd-user-sessions.service. Jul 10 00:34:58.671822 systemd[1]: Started getty@tty1.service. Jul 10 00:34:58.673586 systemd[1]: Started serial-getty@ttyS0.service. Jul 10 00:34:58.674697 systemd[1]: Reached target getty.target. Jul 10 00:34:59.123941 systemd-networkd[1081]: eth0: Gained IPv6LL Jul 10 00:34:59.125895 systemd[1]: Finished systemd-networkd-wait-online.service. 
Jul 10 00:34:59.127389 systemd[1]: Reached target network-online.target. Jul 10 00:34:59.129780 systemd[1]: Starting kubelet.service... Jul 10 00:34:59.766711 systemd[1]: Started kubelet.service. Jul 10 00:34:59.768018 systemd[1]: Reached target multi-user.target. Jul 10 00:34:59.770165 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 10 00:34:59.775982 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 10 00:34:59.776215 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 10 00:34:59.779710 systemd[1]: Startup finished in 5.716s (kernel) + 5.950s (userspace) = 11.666s. Jul 10 00:35:00.192460 kubelet[1375]: E0710 00:35:00.192400 1375 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:35:00.194058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:35:00.194203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:35:01.632422 systemd[1]: Created slice system-sshd.slice. Jul 10 00:35:01.633521 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:54728.service. Jul 10 00:35:01.668941 sshd[1385]: Accepted publickey for core from 10.0.0.1 port 54728 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:35:01.670183 sshd[1385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:01.678500 systemd-logind[1285]: New session 1 of user core. Jul 10 00:35:01.679403 systemd[1]: Created slice user-500.slice. Jul 10 00:35:01.680311 systemd[1]: Starting user-runtime-dir@500.service... Jul 10 00:35:01.688396 systemd[1]: Finished user-runtime-dir@500.service. Jul 10 00:35:01.689646 systemd[1]: Starting user@500.service... 
Jul 10 00:35:01.692449 (systemd)[1389]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:01.760313 systemd[1389]: Queued start job for default target default.target. Jul 10 00:35:01.760555 systemd[1389]: Reached target paths.target. Jul 10 00:35:01.760574 systemd[1389]: Reached target sockets.target. Jul 10 00:35:01.760588 systemd[1389]: Reached target timers.target. Jul 10 00:35:01.760602 systemd[1389]: Reached target basic.target. Jul 10 00:35:01.760643 systemd[1389]: Reached target default.target. Jul 10 00:35:01.760669 systemd[1389]: Startup finished in 63ms. Jul 10 00:35:01.760871 systemd[1]: Started user@500.service. Jul 10 00:35:01.762076 systemd[1]: Started session-1.scope. Jul 10 00:35:01.810722 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:54732.service. Jul 10 00:35:01.847804 sshd[1399]: Accepted publickey for core from 10.0.0.1 port 54732 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:35:01.849099 sshd[1399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:01.852648 systemd-logind[1285]: New session 2 of user core. Jul 10 00:35:01.853444 systemd[1]: Started session-2.scope. Jul 10 00:35:01.907033 sshd[1399]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:01.909721 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:54734.service. Jul 10 00:35:01.910270 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:54732.service: Deactivated successfully. Jul 10 00:35:01.911258 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:35:01.911284 systemd-logind[1285]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:35:01.912322 systemd-logind[1285]: Removed session 2. 
Jul 10 00:35:01.941813 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 54734 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:35:01.942665 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:01.945629 systemd-logind[1285]: New session 3 of user core. Jul 10 00:35:01.946275 systemd[1]: Started session-3.scope. Jul 10 00:35:01.994715 sshd[1404]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:01.996682 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:54746.service. Jul 10 00:35:01.997047 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:54734.service: Deactivated successfully. Jul 10 00:35:01.997735 systemd-logind[1285]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:35:01.997806 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:35:01.998525 systemd-logind[1285]: Removed session 3. Jul 10 00:35:02.029169 sshd[1412]: Accepted publickey for core from 10.0.0.1 port 54746 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:35:02.030254 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:02.033067 systemd-logind[1285]: New session 4 of user core. Jul 10 00:35:02.033684 systemd[1]: Started session-4.scope. Jul 10 00:35:02.084507 sshd[1412]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:02.086676 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:54760.service. Jul 10 00:35:02.087117 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:54746.service: Deactivated successfully. Jul 10 00:35:02.087893 systemd-logind[1285]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:35:02.088020 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:35:02.089106 systemd-logind[1285]: Removed session 4. 
Jul 10 00:35:02.120180 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 54760 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:35:02.121189 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:35:02.124322 systemd-logind[1285]: New session 5 of user core. Jul 10 00:35:02.125078 systemd[1]: Started session-5.scope. Jul 10 00:35:02.179919 sudo[1424]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:35:02.180091 sudo[1424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:35:02.200460 systemd[1]: Starting docker.service... Jul 10 00:35:02.232786 env[1436]: time="2025-07-10T00:35:02.232710763Z" level=info msg="Starting up" Jul 10 00:35:02.234132 env[1436]: time="2025-07-10T00:35:02.234090191Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 00:35:02.234132 env[1436]: time="2025-07-10T00:35:02.234118711Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 00:35:02.234230 env[1436]: time="2025-07-10T00:35:02.234140378Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 00:35:02.234230 env[1436]: time="2025-07-10T00:35:02.234151140Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 00:35:02.236933 env[1436]: time="2025-07-10T00:35:02.235672290Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 00:35:02.236933 env[1436]: time="2025-07-10T00:35:02.235686289Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 00:35:02.236933 env[1436]: time="2025-07-10T00:35:02.235696722Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 00:35:02.236933 env[1436]: time="2025-07-10T00:35:02.235705100Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 00:35:02.910384 env[1436]: time="2025-07-10T00:35:02.910337010Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 10 00:35:02.910384 env[1436]: time="2025-07-10T00:35:02.910363527Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 10 00:35:02.910626 env[1436]: time="2025-07-10T00:35:02.910478180Z" level=info msg="Loading containers: start." Jul 10 00:35:03.035791 kernel: Initializing XFRM netlink socket Jul 10 00:35:03.063278 env[1436]: time="2025-07-10T00:35:03.063233423Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 10 00:35:03.111147 systemd-networkd[1081]: docker0: Link UP Jul 10 00:35:03.126778 env[1436]: time="2025-07-10T00:35:03.126733827Z" level=info msg="Loading containers: done." Jul 10 00:35:03.137003 env[1436]: time="2025-07-10T00:35:03.136957845Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:35:03.137156 env[1436]: time="2025-07-10T00:35:03.137129647Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 10 00:35:03.137245 env[1436]: time="2025-07-10T00:35:03.137218934Z" level=info msg="Daemon has completed initialization" Jul 10 00:35:03.153065 systemd[1]: Started docker.service. Jul 10 00:35:03.160270 env[1436]: time="2025-07-10T00:35:03.160217867Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:35:03.809474 env[1297]: time="2025-07-10T00:35:03.809416224Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 00:35:04.481424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084470856.mount: Deactivated successfully. 
Jul 10 00:35:05.887340 env[1297]: time="2025-07-10T00:35:05.887272458Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:05.889283 env[1297]: time="2025-07-10T00:35:05.889249449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:05.891193 env[1297]: time="2025-07-10T00:35:05.891164322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:05.893036 env[1297]: time="2025-07-10T00:35:05.892982262Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:05.893726 env[1297]: time="2025-07-10T00:35:05.893698471Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 10 00:35:05.894391 env[1297]: time="2025-07-10T00:35:05.894366611Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 00:35:08.288712 env[1297]: time="2025-07-10T00:35:08.288645202Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:08.290960 env[1297]: time="2025-07-10T00:35:08.290934430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 10 00:35:08.292687 env[1297]: time="2025-07-10T00:35:08.292667812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:08.294388 env[1297]: time="2025-07-10T00:35:08.294362214Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:08.295285 env[1297]: time="2025-07-10T00:35:08.295252353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 10 00:35:08.295840 env[1297]: time="2025-07-10T00:35:08.295817649Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 00:35:10.108597 env[1297]: time="2025-07-10T00:35:10.108533083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:10.110284 env[1297]: time="2025-07-10T00:35:10.110236929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:10.112103 env[1297]: time="2025-07-10T00:35:10.112060198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:10.113771 env[1297]: time="2025-07-10T00:35:10.113723573Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:10.115538 env[1297]: time="2025-07-10T00:35:10.115503425Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 10 00:35:10.116082 env[1297]: time="2025-07-10T00:35:10.116051293Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:35:10.345200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:35:10.345416 systemd[1]: Stopped kubelet.service. Jul 10 00:35:10.444664 systemd[1]: Starting kubelet.service... Jul 10 00:35:10.812073 systemd[1]: Started kubelet.service. Jul 10 00:35:10.887855 kubelet[1575]: E0710 00:35:10.887777 1575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:35:10.890442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:35:10.890581 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:35:12.181974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968588685.mount: Deactivated successfully. 
Jul 10 00:35:13.254316 env[1297]: time="2025-07-10T00:35:13.254254558Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:13.256468 env[1297]: time="2025-07-10T00:35:13.256431807Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:13.258413 env[1297]: time="2025-07-10T00:35:13.258345248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:13.261035 env[1297]: time="2025-07-10T00:35:13.260995151Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:13.261539 env[1297]: time="2025-07-10T00:35:13.261443837Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 10 00:35:13.262312 env[1297]: time="2025-07-10T00:35:13.262215919Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:35:13.810217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687019292.mount: Deactivated successfully. 
Jul 10 00:35:15.463555 env[1297]: time="2025-07-10T00:35:15.463485985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:15.466391 env[1297]: time="2025-07-10T00:35:15.466355153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:15.468525 env[1297]: time="2025-07-10T00:35:15.468482310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:15.470253 env[1297]: time="2025-07-10T00:35:15.470221417Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:15.471106 env[1297]: time="2025-07-10T00:35:15.471052385Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 00:35:15.471601 env[1297]: time="2025-07-10T00:35:15.471577957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:35:16.353667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892363237.mount: Deactivated successfully. 
Jul 10 00:35:16.358545 env[1297]: time="2025-07-10T00:35:16.358502431Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:16.360225 env[1297]: time="2025-07-10T00:35:16.360194564Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:16.361676 env[1297]: time="2025-07-10T00:35:16.361630952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:16.362971 env[1297]: time="2025-07-10T00:35:16.362917112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:16.363354 env[1297]: time="2025-07-10T00:35:16.363330319Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 00:35:16.363907 env[1297]: time="2025-07-10T00:35:16.363883182Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 10 00:35:16.960998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971054617.mount: Deactivated successfully.
Jul 10 00:35:20.728247 env[1297]: time="2025-07-10T00:35:20.728174079Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:20.730638 env[1297]: time="2025-07-10T00:35:20.730587442Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:20.732957 env[1297]: time="2025-07-10T00:35:20.732911711Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:20.734908 env[1297]: time="2025-07-10T00:35:20.734870397Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 10 00:35:20.735738 env[1297]: time="2025-07-10T00:35:20.735696670Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 10 00:35:21.095040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 10 00:35:21.095225 systemd[1]: Stopped kubelet.service.
Jul 10 00:35:21.096502 systemd[1]: Starting kubelet.service...
Jul 10 00:35:21.182252 systemd[1]: Started kubelet.service.
Jul 10 00:35:21.227935 kubelet[1608]: E0710 00:35:21.227880 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:35:21.230082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:35:21.230243 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:35:22.724214 systemd[1]: Stopped kubelet.service.
Jul 10 00:35:22.725962 systemd[1]: Starting kubelet.service...
Jul 10 00:35:22.748399 systemd[1]: Reloading.
Jul 10 00:35:22.816117 /usr/lib/systemd/system-generators/torcx-generator[1648]: time="2025-07-10T00:35:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Jul 10 00:35:22.816143 /usr/lib/systemd/system-generators/torcx-generator[1648]: time="2025-07-10T00:35:22Z" level=info msg="torcx already run"
Jul 10 00:35:23.444301 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 10 00:35:23.444322 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 10 00:35:23.465490 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:35:23.537883 systemd[1]: Started kubelet.service.
Jul 10 00:35:23.539870 systemd[1]: Stopping kubelet.service...
Jul 10 00:35:23.540152 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:35:23.540382 systemd[1]: Stopped kubelet.service.
Jul 10 00:35:23.542082 systemd[1]: Starting kubelet.service...
Jul 10 00:35:23.632849 systemd[1]: Started kubelet.service.
Jul 10 00:35:23.672994 kubelet[1711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:35:23.672994 kubelet[1711]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:35:23.672994 kubelet[1711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:35:23.673383 kubelet[1711]: I0710 00:35:23.673041 1711 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:35:24.077996 kubelet[1711]: I0710 00:35:24.077933 1711 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 10 00:35:24.077996 kubelet[1711]: I0710 00:35:24.077962 1711 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:35:24.078206 kubelet[1711]: I0710 00:35:24.078191 1711 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 10 00:35:24.097161 kubelet[1711]: E0710 00:35:24.097119 1711 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:35:24.098240 kubelet[1711]: I0710 00:35:24.098210 1711 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:35:24.103149 kubelet[1711]: E0710 00:35:24.103111 1711 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 00:35:24.103149 kubelet[1711]: I0710 00:35:24.103137 1711 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 00:35:24.107569 kubelet[1711]: I0710 00:35:24.107538 1711 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:35:24.108410 kubelet[1711]: I0710 00:35:24.108386 1711 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 10 00:35:24.108543 kubelet[1711]: I0710 00:35:24.108509 1711 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:35:24.108717 kubelet[1711]: I0710 00:35:24.108539 1711 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 10 00:35:24.108818 kubelet[1711]: I0710 00:35:24.108718 1711 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:35:24.108818 kubelet[1711]: I0710 00:35:24.108727 1711 container_manager_linux.go:300] "Creating device plugin manager"
Jul 10 00:35:24.108880 kubelet[1711]: I0710 00:35:24.108857 1711 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:35:24.116779 kubelet[1711]: I0710 00:35:24.116744 1711 kubelet.go:408] "Attempting to sync node with API server"
Jul 10 00:35:24.116779 kubelet[1711]: I0710 00:35:24.116779 1711 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:35:24.116861 kubelet[1711]: I0710 00:35:24.116819 1711 kubelet.go:314] "Adding apiserver pod source"
Jul 10 00:35:24.116861 kubelet[1711]: I0710 00:35:24.116852 1711 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:35:24.126857 kubelet[1711]: W0710 00:35:24.126748 1711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused
Jul 10 00:35:24.127030 kubelet[1711]: E0710 00:35:24.126867 1711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:35:24.137383 kubelet[1711]: I0710 00:35:24.137354 1711 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 10 00:35:24.138074 kubelet[1711]: I0710 00:35:24.138046 1711 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 00:35:24.140520 kubelet[1711]: W0710 00:35:24.140469 1711 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 10 00:35:24.140798 kubelet[1711]: W0710 00:35:24.140463 1711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused
Jul 10 00:35:24.140862 kubelet[1711]: E0710 00:35:24.140824 1711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:35:24.144714 kubelet[1711]: I0710 00:35:24.144687 1711 server.go:1274] "Started kubelet"
Jul 10 00:35:24.144848 kubelet[1711]: I0710 00:35:24.144795 1711 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:35:24.145993 kubelet[1711]: I0710 00:35:24.145959 1711 server.go:449] "Adding debug handlers to kubelet server"
Jul 10 00:35:24.147731 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 10 00:35:24.147874 kubelet[1711]: I0710 00:35:24.147851 1711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:35:24.153087 kubelet[1711]: I0710 00:35:24.153050 1711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:35:24.153155 kubelet[1711]: I0710 00:35:24.153086 1711 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:35:24.153264 kubelet[1711]: I0710 00:35:24.153246 1711 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:35:24.154094 kubelet[1711]: I0710 00:35:24.153530 1711 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 10 00:35:24.154094 kubelet[1711]: I0710 00:35:24.153620 1711 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 10 00:35:24.154094 kubelet[1711]: I0710 00:35:24.153656 1711 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:35:24.154094 kubelet[1711]: W0710 00:35:24.154019 1711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused
Jul 10 00:35:24.154094 kubelet[1711]: E0710 00:35:24.154057 1711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:35:24.154437 kubelet[1711]: I0710 00:35:24.154419 1711 factory.go:221] Registration of the systemd container factory successfully
Jul 10 00:35:24.154500 kubelet[1711]: I0710 00:35:24.154475 1711 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:35:24.155242 kubelet[1711]: E0710 00:35:24.154725 1711 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:35:24.155242 kubelet[1711]: E0710 00:35:24.154924 1711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms"
Jul 10 00:35:24.159827 kubelet[1711]: I0710 00:35:24.156718 1711 factory.go:221] Registration of the containerd container factory successfully
Jul 10 00:35:24.161738 kubelet[1711]: E0710 00:35:24.160654 1711 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bcb129dc1a22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:35:24.14464669 +0000 UTC m=+0.507525990,LastTimestamp:2025-07-10 00:35:24.14464669 +0000 UTC m=+0.507525990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 10 00:35:24.168220 kubelet[1711]: I0710 00:35:24.167655 1711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:35:24.169477 kubelet[1711]: I0710 00:35:24.169445 1711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:35:24.169477 kubelet[1711]: I0710 00:35:24.169473 1711 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 10 00:35:24.169552 kubelet[1711]: I0710 00:35:24.169500 1711 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 10 00:35:24.169577 kubelet[1711]: E0710 00:35:24.169548 1711 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:35:24.170787 kubelet[1711]: E0710 00:35:24.170748 1711 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:35:24.172419 kubelet[1711]: W0710 00:35:24.172396 1711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused
Jul 10 00:35:24.172485 kubelet[1711]: E0710 00:35:24.172428 1711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:35:24.180743 kubelet[1711]: I0710 00:35:24.180711 1711 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 10 00:35:24.180743 kubelet[1711]: I0710 00:35:24.180738 1711 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 10 00:35:24.180830 kubelet[1711]: I0710 00:35:24.180780 1711 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:35:24.255275 kubelet[1711]: E0710 00:35:24.255239 1711 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:35:24.270457 kubelet[1711]: E0710 00:35:24.270406 1711 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 10 00:35:24.355868 kubelet[1711]: E0710 00:35:24.355815 1711 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:35:24.356260 kubelet[1711]: E0710 00:35:24.356217 1711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms"
Jul 10 00:35:24.456820 kubelet[1711]: E0710 00:35:24.456771 1711 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:35:24.470935 kubelet[1711]: E0710 00:35:24.470900 1711 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 10 00:35:24.557435 kubelet[1711]: E0710 00:35:24.557395 1711 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:35:24.658103 kubelet[1711]: E0710 00:35:24.657987 1711 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:35:24.674398 kubelet[1711]: I0710 00:35:24.674343 1711 policy_none.go:49] "None policy: Start"
Jul 10 00:35:24.675482 kubelet[1711]: I0710 00:35:24.675461 1711 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 10 00:35:24.675531 kubelet[1711]: I0710 00:35:24.675488 1711 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:35:24.682614 kubelet[1711]: I0710 00:35:24.682569 1711 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 00:35:24.682784 kubelet[1711]: I0710 00:35:24.682772 1711 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:35:24.682837 kubelet[1711]: I0710 00:35:24.682789 1711 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:35:24.683411 kubelet[1711]: I0710 00:35:24.683397 1711 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:35:24.683957 kubelet[1711]: E0710 00:35:24.683938 1711 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 10 00:35:24.757732 kubelet[1711]: E0710 00:35:24.757668 1711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms"
Jul 10 00:35:24.772356 kubelet[1711]: E0710 00:35:24.772234 1711 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bcb129dc1a22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:35:24.14464669 +0000 UTC m=+0.507525990,LastTimestamp:2025-07-10 00:35:24.14464669 +0000 UTC m=+0.507525990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 10 00:35:24.784608 kubelet[1711]: I0710 00:35:24.784575 1711 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 10 00:35:24.785010 kubelet[1711]: E0710 00:35:24.784975 1711 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost"
Jul 10 00:35:24.958743 kubelet[1711]: I0710 00:35:24.958612 1711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:35:24.958743 kubelet[1711]: I0710 00:35:24.958656 1711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 10 00:35:24.958743 kubelet[1711]: I0710 00:35:24.958679 1711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:35:24.958743 kubelet[1711]: I0710 00:35:24.958708 1711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:35:24.959061 kubelet[1711]: I0710 00:35:24.958798 1711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:35:24.959061 kubelet[1711]: I0710 00:35:24.958845 1711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:35:24.959061 kubelet[1711]: I0710 00:35:24.958941 1711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:35:24.959061 kubelet[1711]: I0710 00:35:24.958977 1711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:35:24.959061 kubelet[1711]: I0710 00:35:24.958998 1711 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:35:24.986816 kubelet[1711]: I0710 00:35:24.986787 1711 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 10 00:35:24.987103 kubelet[1711]: E0710 00:35:24.987077 1711 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost"
Jul 10 00:35:25.040821 kubelet[1711]: W0710 00:35:25.040745 1711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused
Jul 10 00:35:25.040889 kubelet[1711]: E0710 00:35:25.040832 1711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:35:25.176679 kubelet[1711]: E0710 00:35:25.176628 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:25.176861 kubelet[1711]: E0710 00:35:25.176690 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:25.177329 kubelet[1711]: E0710 00:35:25.177314 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:25.177486 env[1297]: time="2025-07-10T00:35:25.177440299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}"
Jul 10 00:35:25.177778 env[1297]: time="2025-07-10T00:35:25.177466489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:315cc6b48aca4767541b5b6412fd8271,Namespace:kube-system,Attempt:0,}"
Jul 10 00:35:25.177778 env[1297]: time="2025-07-10T00:35:25.177605899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}"
Jul 10 00:35:25.243625 kubelet[1711]: W0710 00:35:25.243462 1711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused
Jul 10 00:35:25.243625 kubelet[1711]: E0710 00:35:25.243541 1711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:35:25.279898 kubelet[1711]: W0710 00:35:25.279813 1711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused
Jul 10 00:35:25.279898 kubelet[1711]: E0710 00:35:25.279893 1711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:35:25.347624 kubelet[1711]: W0710 00:35:25.347539 1711 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused
Jul 10 00:35:25.347784 kubelet[1711]: E0710 00:35:25.347624 1711 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:35:25.388539 kubelet[1711]: I0710 00:35:25.388499 1711 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 10 00:35:25.388857 kubelet[1711]: E0710 00:35:25.388831 1711 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost"
Jul 10 00:35:25.559134 kubelet[1711]: E0710 00:35:25.558993 1711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="1.6s"
Jul 10 00:35:25.747860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258891343.mount: Deactivated successfully.
Jul 10 00:35:25.756058 env[1297]: time="2025-07-10T00:35:25.756000033Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.756992 env[1297]: time="2025-07-10T00:35:25.756971029Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.762483 env[1297]: time="2025-07-10T00:35:25.762426466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.764261 env[1297]: time="2025-07-10T00:35:25.764207556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.765744 env[1297]: time="2025-07-10T00:35:25.765706131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.767034 env[1297]: time="2025-07-10T00:35:25.766998267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.768714 env[1297]: time="2025-07-10T00:35:25.768674013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.769860 env[1297]: time="2025-07-10T00:35:25.769834224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.772204 env[1297]: time="2025-07-10T00:35:25.772181417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.774334 env[1297]: time="2025-07-10T00:35:25.774310931Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.775083 env[1297]: time="2025-07-10T00:35:25.774880331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.775472 env[1297]: time="2025-07-10T00:35:25.775437838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:25.827460 env[1297]: time="2025-07-10T00:35:25.827310801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:25.827460 env[1297]: time="2025-07-10T00:35:25.827347241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:25.827460 env[1297]: time="2025-07-10T00:35:25.827356349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:25.827644 env[1297]: time="2025-07-10T00:35:25.827470329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da3ce3e4f770ea2f641fd5e019348e4018801d1ffd710e904138b76101d2e9e4 pid=1753 runtime=io.containerd.runc.v2 Jul 10 00:35:25.833679 env[1297]: time="2025-07-10T00:35:25.833610329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:25.833679 env[1297]: time="2025-07-10T00:35:25.833646639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:25.833679 env[1297]: time="2025-07-10T00:35:25.833656057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:25.833983 env[1297]: time="2025-07-10T00:35:25.833930337Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a12d2469ad241de57f0f3c152f67595ad4142d4a1443ac77d379a1ff42dce4e pid=1761 runtime=io.containerd.runc.v2 Jul 10 00:35:25.839774 env[1297]: time="2025-07-10T00:35:25.839685984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:25.839820 env[1297]: time="2025-07-10T00:35:25.839767912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:25.839820 env[1297]: time="2025-07-10T00:35:25.839787740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:25.840729 env[1297]: time="2025-07-10T00:35:25.840012675Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7405503aecae7fc5e59b3b4d253a22521db6bdd2ff2e641bbc370644450f9e15 pid=1785 runtime=io.containerd.runc.v2 Jul 10 00:35:26.001300 env[1297]: time="2025-07-10T00:35:26.001254777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"da3ce3e4f770ea2f641fd5e019348e4018801d1ffd710e904138b76101d2e9e4\"" Jul 10 00:35:26.003132 kubelet[1711]: E0710 00:35:26.002279 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:26.006020 env[1297]: time="2025-07-10T00:35:26.005995669Z" level=info msg="CreateContainer within sandbox \"da3ce3e4f770ea2f641fd5e019348e4018801d1ffd710e904138b76101d2e9e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:35:26.014859 env[1297]: time="2025-07-10T00:35:26.014809070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a12d2469ad241de57f0f3c152f67595ad4142d4a1443ac77d379a1ff42dce4e\"" Jul 10 00:35:26.015503 kubelet[1711]: E0710 00:35:26.015476 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:26.018005 env[1297]: time="2025-07-10T00:35:26.017968542Z" level=info msg="CreateContainer within sandbox \"6a12d2469ad241de57f0f3c152f67595ad4142d4a1443ac77d379a1ff42dce4e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 
00:35:26.018005 env[1297]: time="2025-07-10T00:35:26.017973131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:315cc6b48aca4767541b5b6412fd8271,Namespace:kube-system,Attempt:0,} returns sandbox id \"7405503aecae7fc5e59b3b4d253a22521db6bdd2ff2e641bbc370644450f9e15\"" Jul 10 00:35:26.018748 kubelet[1711]: E0710 00:35:26.018613 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:26.020022 env[1297]: time="2025-07-10T00:35:26.019988848Z" level=info msg="CreateContainer within sandbox \"7405503aecae7fc5e59b3b4d253a22521db6bdd2ff2e641bbc370644450f9e15\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:35:26.032040 env[1297]: time="2025-07-10T00:35:26.031983924Z" level=info msg="CreateContainer within sandbox \"da3ce3e4f770ea2f641fd5e019348e4018801d1ffd710e904138b76101d2e9e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"37597c50983b2ccd85dbf412b2ac67d066833659351d8cb5ed214e7eb98da6af\"" Jul 10 00:35:26.032606 env[1297]: time="2025-07-10T00:35:26.032577078Z" level=info msg="StartContainer for \"37597c50983b2ccd85dbf412b2ac67d066833659351d8cb5ed214e7eb98da6af\"" Jul 10 00:35:26.043829 env[1297]: time="2025-07-10T00:35:26.043787511Z" level=info msg="CreateContainer within sandbox \"7405503aecae7fc5e59b3b4d253a22521db6bdd2ff2e641bbc370644450f9e15\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f3ff5ed6302c71847b2dc69942c893d90e69f6c409d5803f76950e9d6ff481ea\"" Jul 10 00:35:26.044267 env[1297]: time="2025-07-10T00:35:26.044235615Z" level=info msg="StartContainer for \"f3ff5ed6302c71847b2dc69942c893d90e69f6c409d5803f76950e9d6ff481ea\"" Jul 10 00:35:26.049572 env[1297]: time="2025-07-10T00:35:26.049541915Z" level=info msg="CreateContainer within sandbox 
\"6a12d2469ad241de57f0f3c152f67595ad4142d4a1443ac77d379a1ff42dce4e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6b961d2a46429bb95f53bbe4005040cd85aa95249b4554576def781b8c930e98\"" Jul 10 00:35:26.051143 env[1297]: time="2025-07-10T00:35:26.051128044Z" level=info msg="StartContainer for \"6b961d2a46429bb95f53bbe4005040cd85aa95249b4554576def781b8c930e98\"" Jul 10 00:35:26.096643 env[1297]: time="2025-07-10T00:35:26.094236507Z" level=info msg="StartContainer for \"37597c50983b2ccd85dbf412b2ac67d066833659351d8cb5ed214e7eb98da6af\" returns successfully" Jul 10 00:35:26.120552 env[1297]: time="2025-07-10T00:35:26.120499993Z" level=info msg="StartContainer for \"f3ff5ed6302c71847b2dc69942c893d90e69f6c409d5803f76950e9d6ff481ea\" returns successfully" Jul 10 00:35:26.125588 env[1297]: time="2025-07-10T00:35:26.125533377Z" level=info msg="StartContainer for \"6b961d2a46429bb95f53bbe4005040cd85aa95249b4554576def781b8c930e98\" returns successfully" Jul 10 00:35:26.193966 kubelet[1711]: E0710 00:35:26.172674 1711 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.23:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:35:26.193966 kubelet[1711]: E0710 00:35:26.178033 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:26.193966 kubelet[1711]: E0710 00:35:26.180144 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:26.193966 kubelet[1711]: E0710 00:35:26.190061 1711 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:26.193966 kubelet[1711]: I0710 00:35:26.190632 1711 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:35:26.193966 kubelet[1711]: E0710 00:35:26.190994 1711 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Jul 10 00:35:27.190224 kubelet[1711]: E0710 00:35:27.190188 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:27.685899 kubelet[1711]: E0710 00:35:27.685860 1711 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:35:27.793312 kubelet[1711]: I0710 00:35:27.793260 1711 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:35:27.850382 kubelet[1711]: I0710 00:35:27.850287 1711 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:35:28.128600 kubelet[1711]: I0710 00:35:28.128557 1711 apiserver.go:52] "Watching apiserver" Jul 10 00:35:28.154462 kubelet[1711]: I0710 00:35:28.154416 1711 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:35:28.196140 kubelet[1711]: E0710 00:35:28.196083 1711 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:28.196470 kubelet[1711]: E0710 00:35:28.196289 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:28.757055 kubelet[1711]: E0710 00:35:28.757025 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:29.192744 kubelet[1711]: E0710 00:35:29.192716 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:29.718820 systemd[1]: Reloading. Jul 10 00:35:29.782685 /usr/lib/systemd/system-generators/torcx-generator[2003]: time="2025-07-10T00:35:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:35:29.782713 /usr/lib/systemd/system-generators/torcx-generator[2003]: time="2025-07-10T00:35:29Z" level=info msg="torcx already run" Jul 10 00:35:29.851675 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:35:29.851691 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:35:29.872128 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 10 00:35:29.897933 kubelet[1711]: E0710 00:35:29.897898 1711 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:29.950696 kubelet[1711]: I0710 00:35:29.950656 1711 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:35:29.950730 systemd[1]: Stopping kubelet.service... Jul 10 00:35:29.975042 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:35:29.975283 systemd[1]: Stopped kubelet.service. Jul 10 00:35:29.976974 systemd[1]: Starting kubelet.service... Jul 10 00:35:30.072481 systemd[1]: Started kubelet.service. Jul 10 00:35:30.116053 kubelet[2059]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:35:30.116053 kubelet[2059]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:35:30.116053 kubelet[2059]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:35:30.116440 kubelet[2059]: I0710 00:35:30.116093 2059 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:35:30.121569 kubelet[2059]: I0710 00:35:30.121509 2059 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:35:30.121569 kubelet[2059]: I0710 00:35:30.121540 2059 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:35:30.121786 kubelet[2059]: I0710 00:35:30.121769 2059 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:35:30.126345 kubelet[2059]: I0710 00:35:30.126315 2059 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:35:30.128983 kubelet[2059]: I0710 00:35:30.128934 2059 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:35:30.132508 kubelet[2059]: E0710 00:35:30.132472 2059 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:35:30.132508 kubelet[2059]: I0710 00:35:30.132503 2059 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:35:30.136190 kubelet[2059]: I0710 00:35:30.136164 2059 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:35:30.136464 kubelet[2059]: I0710 00:35:30.136447 2059 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:35:30.136583 kubelet[2059]: I0710 00:35:30.136542 2059 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:35:30.136795 kubelet[2059]: I0710 00:35:30.136579 2059 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicy
Options":null,"CgroupVersion":1} Jul 10 00:35:30.136930 kubelet[2059]: I0710 00:35:30.136801 2059 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:35:30.136930 kubelet[2059]: I0710 00:35:30.136813 2059 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:35:30.136930 kubelet[2059]: I0710 00:35:30.136845 2059 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:35:30.137054 kubelet[2059]: I0710 00:35:30.136945 2059 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:35:30.137054 kubelet[2059]: I0710 00:35:30.136962 2059 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:35:30.137054 kubelet[2059]: I0710 00:35:30.137002 2059 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:35:30.137054 kubelet[2059]: I0710 00:35:30.137013 2059 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:35:30.137958 kubelet[2059]: I0710 00:35:30.137935 2059 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:35:30.138300 kubelet[2059]: I0710 00:35:30.138282 2059 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:35:30.138695 kubelet[2059]: I0710 00:35:30.138653 2059 server.go:1274] "Started kubelet" Jul 10 00:35:30.139145 kubelet[2059]: I0710 00:35:30.139008 2059 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:35:30.139712 kubelet[2059]: I0710 00:35:30.139692 2059 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:35:30.140955 kubelet[2059]: I0710 00:35:30.140896 2059 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:35:30.142321 kubelet[2059]: I0710 00:35:30.142291 2059 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:35:30.143038 
kubelet[2059]: I0710 00:35:30.143026 2059 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:35:30.151323 kubelet[2059]: I0710 00:35:30.151265 2059 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:35:30.151509 kubelet[2059]: I0710 00:35:30.151443 2059 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:35:30.151613 kubelet[2059]: I0710 00:35:30.151591 2059 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:35:30.152288 kubelet[2059]: I0710 00:35:30.151931 2059 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:35:30.153385 kubelet[2059]: I0710 00:35:30.153279 2059 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:35:30.153432 kubelet[2059]: I0710 00:35:30.153404 2059 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:35:30.155053 kubelet[2059]: E0710 00:35:30.155035 2059 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:35:30.155500 kubelet[2059]: I0710 00:35:30.155487 2059 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:35:30.158785 kubelet[2059]: I0710 00:35:30.158720 2059 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:35:30.159477 kubelet[2059]: I0710 00:35:30.159453 2059 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:35:30.159477 kubelet[2059]: I0710 00:35:30.159474 2059 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:35:30.159548 kubelet[2059]: I0710 00:35:30.159493 2059 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:35:30.159548 kubelet[2059]: E0710 00:35:30.159536 2059 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:35:30.193753 kubelet[2059]: I0710 00:35:30.193726 2059 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:35:30.193753 kubelet[2059]: I0710 00:35:30.193744 2059 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:35:30.193940 kubelet[2059]: I0710 00:35:30.193833 2059 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:35:30.194020 kubelet[2059]: I0710 00:35:30.193999 2059 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:35:30.194077 kubelet[2059]: I0710 00:35:30.194015 2059 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:35:30.194077 kubelet[2059]: I0710 00:35:30.194035 2059 policy_none.go:49] "None policy: Start" Jul 10 00:35:30.194618 kubelet[2059]: I0710 00:35:30.194591 2059 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:35:30.194618 kubelet[2059]: I0710 00:35:30.194617 2059 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:35:30.194817 kubelet[2059]: I0710 00:35:30.194803 2059 state_mem.go:75] "Updated machine memory state" Jul 10 00:35:30.195876 kubelet[2059]: I0710 00:35:30.195853 2059 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:35:30.196015 kubelet[2059]: I0710 00:35:30.196002 2059 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:35:30.196046 kubelet[2059]: I0710 00:35:30.196017 2059 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:35:30.196593 kubelet[2059]: I0710 00:35:30.196571 2059 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:35:30.302307 kubelet[2059]: I0710 00:35:30.302202 2059 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:35:30.352408 kubelet[2059]: I0710 00:35:30.352348 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:30.352531 kubelet[2059]: I0710 00:35:30.352518 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:30.352592 kubelet[2059]: I0710 00:35:30.352562 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:30.352642 kubelet[2059]: I0710 00:35:30.352595 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:30.352642 kubelet[2059]: I0710 00:35:30.352615 2059 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:30.352642 kubelet[2059]: I0710 00:35:30.352631 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:30.352742 kubelet[2059]: I0710 00:35:30.352699 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/315cc6b48aca4767541b5b6412fd8271-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"315cc6b48aca4767541b5b6412fd8271\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:35:30.352799 kubelet[2059]: I0710 00:35:30.352777 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:30.352831 kubelet[2059]: I0710 00:35:30.352808 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:30.362806 kubelet[2059]: E0710 00:35:30.362585 2059 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:35:30.363004 kubelet[2059]: E0710 00:35:30.362829 2059 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:35:30.364843 kubelet[2059]: I0710 00:35:30.364818 2059 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 10 00:35:30.364899 kubelet[2059]: I0710 00:35:30.364895 2059 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:35:30.612221 sudo[2094]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:35:30.612410 sudo[2094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 10 00:35:30.660815 kubelet[2059]: E0710 00:35:30.660776 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:30.663057 kubelet[2059]: E0710 00:35:30.663033 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:30.663232 kubelet[2059]: E0710 00:35:30.663208 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:31.137529 kubelet[2059]: I0710 00:35:31.137477 2059 apiserver.go:52] "Watching apiserver" Jul 10 00:35:31.152537 kubelet[2059]: I0710 00:35:31.152492 2059 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:35:31.172028 kubelet[2059]: E0710 00:35:31.171989 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:31.207658 kubelet[2059]: E0710 00:35:31.172127 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:31.207960 kubelet[2059]: E0710 00:35:31.172369 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:31.320569 sudo[2094]: pam_unix(sudo:session): session closed for user root Jul 10 00:35:31.486643 kubelet[2059]: I0710 00:35:31.486469 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.486450779 podStartE2EDuration="1.486450779s" podCreationTimestamp="2025-07-10 00:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:31.486198967 +0000 UTC m=+1.407331357" watchObservedRunningTime="2025-07-10 00:35:31.486450779 +0000 UTC m=+1.407583149" Jul 10 00:35:31.487323 kubelet[2059]: I0710 00:35:31.487274 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.487265137 podStartE2EDuration="2.487265137s" podCreationTimestamp="2025-07-10 00:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:31.4780271 +0000 UTC m=+1.399159470" watchObservedRunningTime="2025-07-10 00:35:31.487265137 +0000 UTC m=+1.408397507" Jul 10 00:35:31.495818 kubelet[2059]: I0710 00:35:31.495714 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=3.495675099 podStartE2EDuration="3.495675099s" podCreationTimestamp="2025-07-10 00:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:31.495425772 +0000 UTC m=+1.416558143" watchObservedRunningTime="2025-07-10 00:35:31.495675099 +0000 UTC m=+1.416807469" Jul 10 00:35:32.173501 kubelet[2059]: E0710 00:35:32.173454 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:33.174386 kubelet[2059]: E0710 00:35:33.174351 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:33.586502 sudo[1424]: pam_unix(sudo:session): session closed for user root Jul 10 00:35:33.587924 sshd[1418]: pam_unix(sshd:session): session closed for user core Jul 10 00:35:33.590312 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:54760.service: Deactivated successfully. Jul 10 00:35:33.591403 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:35:33.591929 systemd-logind[1285]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:35:33.592838 systemd-logind[1285]: Removed session 5. Jul 10 00:35:35.076176 kubelet[2059]: I0710 00:35:35.076101 2059 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:35:35.076676 env[1297]: time="2025-07-10T00:35:35.076473438Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 10 00:35:35.076968 kubelet[2059]: I0710 00:35:35.076682 2059 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:35:35.996005 kubelet[2059]: I0710 00:35:35.995959 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/343f442e-38a4-4037-8293-b582d64fb456-xtables-lock\") pod \"kube-proxy-nh7jx\" (UID: \"343f442e-38a4-4037-8293-b582d64fb456\") " pod="kube-system/kube-proxy-nh7jx" Jul 10 00:35:35.996005 kubelet[2059]: I0710 00:35:35.995997 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-xtables-lock\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996005 kubelet[2059]: I0710 00:35:35.996011 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-hostproc\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996233 kubelet[2059]: I0710 00:35:35.996026 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cilium-cgroup\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996233 kubelet[2059]: I0710 00:35:35.996041 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-host-proc-sys-net\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " 
pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996233 kubelet[2059]: I0710 00:35:35.996067 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34fc034b-0d53-4785-9302-649b7383823f-hubble-tls\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996233 kubelet[2059]: I0710 00:35:35.996080 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/343f442e-38a4-4037-8293-b582d64fb456-lib-modules\") pod \"kube-proxy-nh7jx\" (UID: \"343f442e-38a4-4037-8293-b582d64fb456\") " pod="kube-system/kube-proxy-nh7jx" Jul 10 00:35:35.996233 kubelet[2059]: I0710 00:35:35.996092 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-bpf-maps\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996233 kubelet[2059]: I0710 00:35:35.996110 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34fc034b-0d53-4785-9302-649b7383823f-cilium-config-path\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996505 kubelet[2059]: I0710 00:35:35.996121 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cilium-run\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996505 kubelet[2059]: I0710 00:35:35.996132 2059 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-etc-cni-netd\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996505 kubelet[2059]: I0710 00:35:35.996146 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-host-proc-sys-kernel\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996505 kubelet[2059]: I0710 00:35:35.996158 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-lib-modules\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996505 kubelet[2059]: I0710 00:35:35.996173 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34fc034b-0d53-4785-9302-649b7383823f-clustermesh-secrets\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996505 kubelet[2059]: I0710 00:35:35.996185 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx4qh\" (UniqueName: \"kubernetes.io/projected/34fc034b-0d53-4785-9302-649b7383823f-kube-api-access-tx4qh\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:35.996645 kubelet[2059]: I0710 00:35:35.996198 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/343f442e-38a4-4037-8293-b582d64fb456-kube-proxy\") pod \"kube-proxy-nh7jx\" (UID: \"343f442e-38a4-4037-8293-b582d64fb456\") " pod="kube-system/kube-proxy-nh7jx" Jul 10 00:35:35.996645 kubelet[2059]: I0710 00:35:35.996214 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hr62\" (UniqueName: \"kubernetes.io/projected/343f442e-38a4-4037-8293-b582d64fb456-kube-api-access-7hr62\") pod \"kube-proxy-nh7jx\" (UID: \"343f442e-38a4-4037-8293-b582d64fb456\") " pod="kube-system/kube-proxy-nh7jx" Jul 10 00:35:35.996645 kubelet[2059]: I0710 00:35:35.996227 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cni-path\") pod \"cilium-xt4d9\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") " pod="kube-system/cilium-xt4d9" Jul 10 00:35:36.097496 kubelet[2059]: I0710 00:35:36.097453 2059 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 00:35:36.197836 kubelet[2059]: I0710 00:35:36.197745 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13d07cbe-f32d-46a1-b838-f23edab289eb-cilium-config-path\") pod \"cilium-operator-5d85765b45-gwpdp\" (UID: \"13d07cbe-f32d-46a1-b838-f23edab289eb\") " pod="kube-system/cilium-operator-5d85765b45-gwpdp" Jul 10 00:35:36.197836 kubelet[2059]: I0710 00:35:36.197821 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzrz\" (UniqueName: \"kubernetes.io/projected/13d07cbe-f32d-46a1-b838-f23edab289eb-kube-api-access-9qzrz\") pod \"cilium-operator-5d85765b45-gwpdp\" (UID: \"13d07cbe-f32d-46a1-b838-f23edab289eb\") " pod="kube-system/cilium-operator-5d85765b45-gwpdp" Jul 10 00:35:36.267556 kubelet[2059]: E0710 00:35:36.267431 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.268179 env[1297]: time="2025-07-10T00:35:36.268087341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nh7jx,Uid:343f442e-38a4-4037-8293-b582d64fb456,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:36.271894 kubelet[2059]: E0710 00:35:36.271863 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.272345 env[1297]: time="2025-07-10T00:35:36.272298554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xt4d9,Uid:34fc034b-0d53-4785-9302-649b7383823f,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:36.362718 kubelet[2059]: E0710 00:35:36.362667 2059 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.363362 env[1297]: time="2025-07-10T00:35:36.363292924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gwpdp,Uid:13d07cbe-f32d-46a1-b838-f23edab289eb,Namespace:kube-system,Attempt:0,}" Jul 10 00:35:36.373088 kubelet[2059]: E0710 00:35:36.372982 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.564032 env[1297]: time="2025-07-10T00:35:36.563641150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:36.564032 env[1297]: time="2025-07-10T00:35:36.563722435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:36.564032 env[1297]: time="2025-07-10T00:35:36.563750087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:36.564032 env[1297]: time="2025-07-10T00:35:36.563985044Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af pid=2164 runtime=io.containerd.runc.v2 Jul 10 00:35:36.564289 env[1297]: time="2025-07-10T00:35:36.564062522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:36.564289 env[1297]: time="2025-07-10T00:35:36.564105143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:36.564289 env[1297]: time="2025-07-10T00:35:36.564117537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:36.564412 env[1297]: time="2025-07-10T00:35:36.564369476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c pid=2162 runtime=io.containerd.runc.v2 Jul 10 00:35:36.567201 env[1297]: time="2025-07-10T00:35:36.566583830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:35:36.567201 env[1297]: time="2025-07-10T00:35:36.566684442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:35:36.567201 env[1297]: time="2025-07-10T00:35:36.566706683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:35:36.567201 env[1297]: time="2025-07-10T00:35:36.566866268Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0219da2c99ca614876281e2565fb50c238d9d7aaf207ac055a7bf4be308c9e96 pid=2179 runtime=io.containerd.runc.v2 Jul 10 00:35:36.611907 env[1297]: time="2025-07-10T00:35:36.611847064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xt4d9,Uid:34fc034b-0d53-4785-9302-649b7383823f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\"" Jul 10 00:35:36.613400 kubelet[2059]: E0710 00:35:36.613371 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.623317 env[1297]: time="2025-07-10T00:35:36.623245775Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:35:36.624650 env[1297]: time="2025-07-10T00:35:36.624601987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gwpdp,Uid:13d07cbe-f32d-46a1-b838-f23edab289eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c\"" Jul 10 00:35:36.626231 kubelet[2059]: E0710 00:35:36.626208 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.627812 env[1297]: time="2025-07-10T00:35:36.627767189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nh7jx,Uid:343f442e-38a4-4037-8293-b582d64fb456,Namespace:kube-system,Attempt:0,} returns sandbox id \"0219da2c99ca614876281e2565fb50c238d9d7aaf207ac055a7bf4be308c9e96\"" Jul 10 00:35:36.628356 
kubelet[2059]: E0710 00:35:36.628334 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:36.630266 env[1297]: time="2025-07-10T00:35:36.630231258Z" level=info msg="CreateContainer within sandbox \"0219da2c99ca614876281e2565fb50c238d9d7aaf207ac055a7bf4be308c9e96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:35:36.652117 env[1297]: time="2025-07-10T00:35:36.652040151Z" level=info msg="CreateContainer within sandbox \"0219da2c99ca614876281e2565fb50c238d9d7aaf207ac055a7bf4be308c9e96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02ecad07d17a7aa6e610724b0d5c297d471619d6306c90a8ebf1e1936ce3d474\"" Jul 10 00:35:36.652632 env[1297]: time="2025-07-10T00:35:36.652592421Z" level=info msg="StartContainer for \"02ecad07d17a7aa6e610724b0d5c297d471619d6306c90a8ebf1e1936ce3d474\"" Jul 10 00:35:36.699204 env[1297]: time="2025-07-10T00:35:36.699139369Z" level=info msg="StartContainer for \"02ecad07d17a7aa6e610724b0d5c297d471619d6306c90a8ebf1e1936ce3d474\" returns successfully" Jul 10 00:35:37.025829 kubelet[2059]: E0710 00:35:37.025793 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:37.183172 kubelet[2059]: E0710 00:35:37.182231 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:37.183172 kubelet[2059]: E0710 00:35:37.182948 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:37.183172 kubelet[2059]: E0710 00:35:37.182973 2059 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:37.191548 kubelet[2059]: I0710 00:35:37.191130 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nh7jx" podStartSLOduration=2.191105335 podStartE2EDuration="2.191105335s" podCreationTimestamp="2025-07-10 00:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:37.19035036 +0000 UTC m=+7.111482740" watchObservedRunningTime="2025-07-10 00:35:37.191105335 +0000 UTC m=+7.112237705" Jul 10 00:35:41.702279 kubelet[2059]: E0710 00:35:41.702242 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:42.195291 kubelet[2059]: E0710 00:35:42.195253 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:42.695910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909137211.mount: Deactivated successfully. Jul 10 00:35:43.112949 update_engine[1288]: I0710 00:35:43.112906 1288 update_attempter.cc:509] Updating boot flags... 
Jul 10 00:35:46.895392 env[1297]: time="2025-07-10T00:35:46.895318717Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:46.896293 env[1297]: time="2025-07-10T00:35:46.896266438Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:46.898133 env[1297]: time="2025-07-10T00:35:46.898086488Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:46.898637 env[1297]: time="2025-07-10T00:35:46.898610468Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:35:46.899969 env[1297]: time="2025-07-10T00:35:46.899942737Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:35:46.901129 env[1297]: time="2025-07-10T00:35:46.901092789Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:35:46.915775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356510410.mount: Deactivated successfully. 
Jul 10 00:35:46.916383 env[1297]: time="2025-07-10T00:35:46.916341166Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\"" Jul 10 00:35:46.917291 env[1297]: time="2025-07-10T00:35:46.917233663Z" level=info msg="StartContainer for \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\"" Jul 10 00:35:47.375693 env[1297]: time="2025-07-10T00:35:47.375638566Z" level=info msg="StartContainer for \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\" returns successfully" Jul 10 00:35:47.392670 env[1297]: time="2025-07-10T00:35:47.392620433Z" level=info msg="shim disconnected" id=b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47 Jul 10 00:35:47.392670 env[1297]: time="2025-07-10T00:35:47.392672802Z" level=warning msg="cleaning up after shim disconnected" id=b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47 namespace=k8s.io Jul 10 00:35:47.392932 env[1297]: time="2025-07-10T00:35:47.392681970Z" level=info msg="cleaning up dead shim" Jul 10 00:35:47.400135 env[1297]: time="2025-07-10T00:35:47.400001255Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2494 runtime=io.containerd.runc.v2\n" Jul 10 00:35:47.912031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47-rootfs.mount: Deactivated successfully. Jul 10 00:35:48.241137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945560927.mount: Deactivated successfully. 
Jul 10 00:35:48.382185 kubelet[2059]: E0710 00:35:48.382032 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:48.387658 env[1297]: time="2025-07-10T00:35:48.387255464Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:35:48.458562 env[1297]: time="2025-07-10T00:35:48.458512624Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\"" Jul 10 00:35:48.460298 env[1297]: time="2025-07-10T00:35:48.459068904Z" level=info msg="StartContainer for \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\"" Jul 10 00:35:48.514301 env[1297]: time="2025-07-10T00:35:48.513940056Z" level=info msg="StartContainer for \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\" returns successfully" Jul 10 00:35:48.519050 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:35:48.519292 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:35:48.519690 systemd[1]: Stopping systemd-sysctl.service... Jul 10 00:35:48.521286 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:35:48.528572 systemd[1]: Finished systemd-sysctl.service. 
Jul 10 00:35:48.553634 env[1297]: time="2025-07-10T00:35:48.553581632Z" level=info msg="shim disconnected" id=f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942 Jul 10 00:35:48.553634 env[1297]: time="2025-07-10T00:35:48.553632749Z" level=warning msg="cleaning up after shim disconnected" id=f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942 namespace=k8s.io Jul 10 00:35:48.554010 env[1297]: time="2025-07-10T00:35:48.553647377Z" level=info msg="cleaning up dead shim" Jul 10 00:35:48.560293 env[1297]: time="2025-07-10T00:35:48.560226258Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2559 runtime=io.containerd.runc.v2\n" Jul 10 00:35:49.017856 env[1297]: time="2025-07-10T00:35:49.017789803Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:49.019496 env[1297]: time="2025-07-10T00:35:49.019460707Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:49.021096 env[1297]: time="2025-07-10T00:35:49.021054565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:35:49.021578 env[1297]: time="2025-07-10T00:35:49.021550761Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:35:49.023786 env[1297]: 
time="2025-07-10T00:35:49.023740484Z" level=info msg="CreateContainer within sandbox \"bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:35:49.035147 env[1297]: time="2025-07-10T00:35:49.035095775Z" level=info msg="CreateContainer within sandbox \"bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\"" Jul 10 00:35:49.035542 env[1297]: time="2025-07-10T00:35:49.035514405Z" level=info msg="StartContainer for \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\"" Jul 10 00:35:49.075393 env[1297]: time="2025-07-10T00:35:49.075206685Z" level=info msg="StartContainer for \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\" returns successfully" Jul 10 00:35:49.389501 kubelet[2059]: E0710 00:35:49.389442 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:49.391818 kubelet[2059]: E0710 00:35:49.391596 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:49.392414 env[1297]: time="2025-07-10T00:35:49.392363100Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:35:49.419031 env[1297]: time="2025-07-10T00:35:49.418570272Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\"" Jul 10 
00:35:49.421742 env[1297]: time="2025-07-10T00:35:49.419929877Z" level=info msg="StartContainer for \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\"" Jul 10 00:35:49.421825 kubelet[2059]: I0710 00:35:49.421446 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gwpdp" podStartSLOduration=1.025648207 podStartE2EDuration="13.421429538s" podCreationTimestamp="2025-07-10 00:35:36 +0000 UTC" firstStartedPulling="2025-07-10 00:35:36.62661165 +0000 UTC m=+6.547744010" lastFinishedPulling="2025-07-10 00:35:49.022392971 +0000 UTC m=+18.943525341" observedRunningTime="2025-07-10 00:35:49.421143497 +0000 UTC m=+19.342275867" watchObservedRunningTime="2025-07-10 00:35:49.421429538 +0000 UTC m=+19.342561908" Jul 10 00:35:49.665793 env[1297]: time="2025-07-10T00:35:49.665614956Z" level=info msg="StartContainer for \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\" returns successfully" Jul 10 00:35:49.715678 env[1297]: time="2025-07-10T00:35:49.715613969Z" level=info msg="shim disconnected" id=e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c Jul 10 00:35:49.715678 env[1297]: time="2025-07-10T00:35:49.715667049Z" level=warning msg="cleaning up after shim disconnected" id=e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c namespace=k8s.io Jul 10 00:35:49.715678 env[1297]: time="2025-07-10T00:35:49.715675625Z" level=info msg="cleaning up dead shim" Jul 10 00:35:49.731021 env[1297]: time="2025-07-10T00:35:49.730969248Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2653 runtime=io.containerd.runc.v2\n" Jul 10 00:35:50.395368 kubelet[2059]: E0710 00:35:50.395335 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:35:50.395368 kubelet[2059]: 
E0710 00:35:50.395370 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:50.396900 env[1297]: time="2025-07-10T00:35:50.396830798Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:35:50.719980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456093859.mount: Deactivated successfully.
Jul 10 00:35:50.720652 env[1297]: time="2025-07-10T00:35:50.720597056Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\""
Jul 10 00:35:50.721146 env[1297]: time="2025-07-10T00:35:50.721122518Z" level=info msg="StartContainer for \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\""
Jul 10 00:35:50.765958 env[1297]: time="2025-07-10T00:35:50.765902297Z" level=info msg="StartContainer for \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\" returns successfully"
Jul 10 00:35:50.783803 env[1297]: time="2025-07-10T00:35:50.783743365Z" level=info msg="shim disconnected" id=3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e
Jul 10 00:35:50.783803 env[1297]: time="2025-07-10T00:35:50.783802407Z" level=warning msg="cleaning up after shim disconnected" id=3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e namespace=k8s.io
Jul 10 00:35:50.784023 env[1297]: time="2025-07-10T00:35:50.783811364Z" level=info msg="cleaning up dead shim"
Jul 10 00:35:50.789854 env[1297]: time="2025-07-10T00:35:50.789808207Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:35:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2706 runtime=io.containerd.runc.v2\n"
Jul 10 00:35:50.912497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e-rootfs.mount: Deactivated successfully.
Jul 10 00:35:51.400724 kubelet[2059]: E0710 00:35:51.400685 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:51.402551 env[1297]: time="2025-07-10T00:35:51.402502581Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:35:51.419716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812321156.mount: Deactivated successfully.
Jul 10 00:35:51.423211 env[1297]: time="2025-07-10T00:35:51.423166299Z" level=info msg="CreateContainer within sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\""
Jul 10 00:35:51.423676 env[1297]: time="2025-07-10T00:35:51.423656754Z" level=info msg="StartContainer for \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\""
Jul 10 00:35:51.471359 env[1297]: time="2025-07-10T00:35:51.471318527Z" level=info msg="StartContainer for \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\" returns successfully"
Jul 10 00:35:51.540369 kubelet[2059]: I0710 00:35:51.540340 2059 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 10 00:35:51.704822 kubelet[2059]: I0710 00:35:51.704670 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vxpb\" (UniqueName: \"kubernetes.io/projected/c2ca6a74-dc92-4ae6-8cd4-901b052f3cbd-kube-api-access-2vxpb\") pod \"coredns-7c65d6cfc9-pw4x8\" (UID: \"c2ca6a74-dc92-4ae6-8cd4-901b052f3cbd\") " pod="kube-system/coredns-7c65d6cfc9-pw4x8"
Jul 10 00:35:51.705087 kubelet[2059]: I0710 00:35:51.705062 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57brm\" (UniqueName: \"kubernetes.io/projected/8d8bd128-ae6b-4a77-a431-77a7cb2fd5a6-kube-api-access-57brm\") pod \"coredns-7c65d6cfc9-g85t6\" (UID: \"8d8bd128-ae6b-4a77-a431-77a7cb2fd5a6\") " pod="kube-system/coredns-7c65d6cfc9-g85t6"
Jul 10 00:35:51.705226 kubelet[2059]: I0710 00:35:51.705204 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2ca6a74-dc92-4ae6-8cd4-901b052f3cbd-config-volume\") pod \"coredns-7c65d6cfc9-pw4x8\" (UID: \"c2ca6a74-dc92-4ae6-8cd4-901b052f3cbd\") " pod="kube-system/coredns-7c65d6cfc9-pw4x8"
Jul 10 00:35:51.705352 kubelet[2059]: I0710 00:35:51.705336 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d8bd128-ae6b-4a77-a431-77a7cb2fd5a6-config-volume\") pod \"coredns-7c65d6cfc9-g85t6\" (UID: \"8d8bd128-ae6b-4a77-a431-77a7cb2fd5a6\") " pod="kube-system/coredns-7c65d6cfc9-g85t6"
Jul 10 00:35:51.871950 kubelet[2059]: E0710 00:35:51.871892 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:51.872564 env[1297]: time="2025-07-10T00:35:51.872533707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pw4x8,Uid:c2ca6a74-dc92-4ae6-8cd4-901b052f3cbd,Namespace:kube-system,Attempt:0,}"
Jul 10 00:35:51.875059 kubelet[2059]: E0710 00:35:51.875022 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:51.875735 env[1297]: time="2025-07-10T00:35:51.875522633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g85t6,Uid:8d8bd128-ae6b-4a77-a431-77a7cb2fd5a6,Namespace:kube-system,Attempt:0,}"
Jul 10 00:35:52.405479 kubelet[2059]: E0710 00:35:52.405447 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:52.421565 kubelet[2059]: I0710 00:35:52.421487 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xt4d9" podStartSLOduration=7.136233076 podStartE2EDuration="17.421465802s" podCreationTimestamp="2025-07-10 00:35:35 +0000 UTC" firstStartedPulling="2025-07-10 00:35:36.614479354 +0000 UTC m=+6.535611724" lastFinishedPulling="2025-07-10 00:35:46.89971208 +0000 UTC m=+16.820844450" observedRunningTime="2025-07-10 00:35:52.420805849 +0000 UTC m=+22.341938229" watchObservedRunningTime="2025-07-10 00:35:52.421465802 +0000 UTC m=+22.342598172"
Jul 10 00:35:53.406598 kubelet[2059]: E0710 00:35:53.406562 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:53.826271 systemd-networkd[1081]: cilium_host: Link UP
Jul 10 00:35:53.826994 systemd-networkd[1081]: cilium_net: Link UP
Jul 10 00:35:53.851614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 10 00:35:53.851745 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 10 00:35:53.851741 systemd-networkd[1081]: cilium_net: Gained carrier
Jul 10 00:35:53.853274 systemd-networkd[1081]: cilium_host: Gained carrier
Jul 10 00:35:53.854091 systemd-networkd[1081]: cilium_net: Gained IPv6LL
Jul 10 00:35:53.854258 systemd-networkd[1081]: cilium_host: Gained IPv6LL
Jul 10 00:35:53.922640 systemd-networkd[1081]: cilium_vxlan: Link UP
Jul 10 00:35:53.922651 systemd-networkd[1081]: cilium_vxlan: Gained carrier
Jul 10 00:35:54.119793 kernel: NET: Registered PF_ALG protocol family
Jul 10 00:35:54.408548 kubelet[2059]: E0710 00:35:54.408449 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:54.626575 systemd-networkd[1081]: lxc_health: Link UP
Jul 10 00:35:54.634783 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 10 00:35:54.634845 systemd-networkd[1081]: lxc_health: Gained carrier
Jul 10 00:35:54.859833 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:37470.service.
Jul 10 00:35:54.895094 sshd[3235]: Accepted publickey for core from 10.0.0.1 port 37470 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw
Jul 10 00:35:54.896631 sshd[3235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:35:54.900436 systemd-logind[1285]: New session 6 of user core.
Jul 10 00:35:54.901401 systemd[1]: Started session-6.scope.
Jul 10 00:35:54.911877 systemd-networkd[1081]: lxcb8b0792e1a37: Link UP
Jul 10 00:35:54.929246 systemd-networkd[1081]: lxcc3650365671d: Link UP
Jul 10 00:35:54.929847 kernel: eth0: renamed from tmpe1750
Jul 10 00:35:54.937558 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 10 00:35:54.937636 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb8b0792e1a37: link becomes ready
Jul 10 00:35:54.939589 systemd-networkd[1081]: lxcb8b0792e1a37: Gained carrier
Jul 10 00:35:54.941862 kernel: eth0: renamed from tmp6446f
Jul 10 00:35:54.947503 systemd-networkd[1081]: lxcc3650365671d: Gained carrier
Jul 10 00:35:54.948065 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc3650365671d: link becomes ready
Jul 10 00:35:55.075969 sshd[3235]: pam_unix(sshd:session): session closed for user core
Jul 10 00:35:55.079403 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:37470.service: Deactivated successfully.
Jul 10 00:35:55.080153 systemd[1]: session-6.scope: Deactivated successfully.
Jul 10 00:35:55.080503 systemd-logind[1285]: Session 6 logged out. Waiting for processes to exit.
Jul 10 00:35:55.081336 systemd-logind[1285]: Removed session 6.
Jul 10 00:35:55.251902 systemd-networkd[1081]: cilium_vxlan: Gained IPv6LL
Jul 10 00:35:56.083963 systemd-networkd[1081]: lxc_health: Gained IPv6LL
Jul 10 00:35:56.278461 kubelet[2059]: E0710 00:35:56.278427 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:56.339916 systemd-networkd[1081]: lxcc3650365671d: Gained IPv6LL
Jul 10 00:35:56.413554 kubelet[2059]: E0710 00:35:56.413508 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:56.915990 systemd-networkd[1081]: lxcb8b0792e1a37: Gained IPv6LL
Jul 10 00:35:58.558379 env[1297]: time="2025-07-10T00:35:58.558321799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:35:58.558810 env[1297]: time="2025-07-10T00:35:58.558362255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:35:58.558810 env[1297]: time="2025-07-10T00:35:58.558371853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:35:58.559070 env[1297]: time="2025-07-10T00:35:58.558968436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6446f678b47f519a50a36f06ba583153e84bfb471c4a109f19ea5ca073a4b8a2 pid=3293 runtime=io.containerd.runc.v2
Jul 10 00:35:58.578237 systemd-resolved[1219]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 10 00:35:58.597557 env[1297]: time="2025-07-10T00:35:58.597505263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g85t6,Uid:8d8bd128-ae6b-4a77-a431-77a7cb2fd5a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6446f678b47f519a50a36f06ba583153e84bfb471c4a109f19ea5ca073a4b8a2\""
Jul 10 00:35:58.598155 kubelet[2059]: E0710 00:35:58.598123 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:58.600203 env[1297]: time="2025-07-10T00:35:58.600168225Z" level=info msg="CreateContainer within sandbox \"6446f678b47f519a50a36f06ba583153e84bfb471c4a109f19ea5ca073a4b8a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 10 00:35:58.642699 env[1297]: time="2025-07-10T00:35:58.642642483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:35:58.642699 env[1297]: time="2025-07-10T00:35:58.642680726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:35:58.642699 env[1297]: time="2025-07-10T00:35:58.642693289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:35:58.642938 env[1297]: time="2025-07-10T00:35:58.642837300Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e175018d446b15cd3fd2dc9746db4877eb0a6b397f0638905cdbd9ce232e36b1 pid=3333 runtime=io.containerd.runc.v2
Jul 10 00:35:58.663586 systemd-resolved[1219]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 10 00:35:58.686544 env[1297]: time="2025-07-10T00:35:58.686500706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pw4x8,Uid:c2ca6a74-dc92-4ae6-8cd4-901b052f3cbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e175018d446b15cd3fd2dc9746db4877eb0a6b397f0638905cdbd9ce232e36b1\""
Jul 10 00:35:58.687159 kubelet[2059]: E0710 00:35:58.687140 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:58.688419 env[1297]: time="2025-07-10T00:35:58.688395211Z" level=info msg="CreateContainer within sandbox \"e175018d446b15cd3fd2dc9746db4877eb0a6b397f0638905cdbd9ce232e36b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 10 00:35:58.762656 env[1297]: time="2025-07-10T00:35:58.762581652Z" level=info msg="CreateContainer within sandbox \"6446f678b47f519a50a36f06ba583153e84bfb471c4a109f19ea5ca073a4b8a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b63805b5b48ccfcc36c951fe9f65aa4f5106ae780177f81ca2e3d13979442ec\""
Jul 10 00:35:58.763204 env[1297]: time="2025-07-10T00:35:58.763126648Z" level=info msg="StartContainer for \"9b63805b5b48ccfcc36c951fe9f65aa4f5106ae780177f81ca2e3d13979442ec\""
Jul 10 00:35:58.768479 env[1297]: time="2025-07-10T00:35:58.768416614Z" level=info msg="CreateContainer within sandbox \"e175018d446b15cd3fd2dc9746db4877eb0a6b397f0638905cdbd9ce232e36b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33a39d7673cc43e2ba7611e89c800c9dca2c0f8ab9040ed2c49fdf6b9e3cb36a\""
Jul 10 00:35:58.770599 env[1297]: time="2025-07-10T00:35:58.769612645Z" level=info msg="StartContainer for \"33a39d7673cc43e2ba7611e89c800c9dca2c0f8ab9040ed2c49fdf6b9e3cb36a\""
Jul 10 00:35:58.813582 env[1297]: time="2025-07-10T00:35:58.812786930Z" level=info msg="StartContainer for \"9b63805b5b48ccfcc36c951fe9f65aa4f5106ae780177f81ca2e3d13979442ec\" returns successfully"
Jul 10 00:35:58.814620 env[1297]: time="2025-07-10T00:35:58.814569255Z" level=info msg="StartContainer for \"33a39d7673cc43e2ba7611e89c800c9dca2c0f8ab9040ed2c49fdf6b9e3cb36a\" returns successfully"
Jul 10 00:35:59.422352 kubelet[2059]: E0710 00:35:59.422118 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:59.422352 kubelet[2059]: E0710 00:35:59.422282 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:35:59.591532 kubelet[2059]: I0710 00:35:59.591459 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pw4x8" podStartSLOduration=23.591436639 podStartE2EDuration="23.591436639s" podCreationTimestamp="2025-07-10 00:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:59.579239017 +0000 UTC m=+29.500371388" watchObservedRunningTime="2025-07-10 00:35:59.591436639 +0000 UTC m=+29.512569009"
Jul 10 00:35:59.601923 kubelet[2059]: I0710 00:35:59.601873 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g85t6" podStartSLOduration=23.601854862 podStartE2EDuration="23.601854862s" podCreationTimestamp="2025-07-10 00:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:35:59.591390922 +0000 UTC m=+29.512523293" watchObservedRunningTime="2025-07-10 00:35:59.601854862 +0000 UTC m=+29.522987232"
Jul 10 00:36:00.078865 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:33878.service.
Jul 10 00:36:00.113911 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 33878 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw
Jul 10 00:36:00.115138 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:36:00.118895 systemd-logind[1285]: New session 7 of user core.
Jul 10 00:36:00.119898 systemd[1]: Started session-7.scope.
Jul 10 00:36:00.250712 sshd[3447]: pam_unix(sshd:session): session closed for user core
Jul 10 00:36:00.253353 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:33878.service: Deactivated successfully.
Jul 10 00:36:00.254455 systemd-logind[1285]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:36:00.254513 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:36:00.255205 systemd-logind[1285]: Removed session 7.
Jul 10 00:36:00.424341 kubelet[2059]: E0710 00:36:00.424297 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:01.426299 kubelet[2059]: E0710 00:36:01.426268 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:01.876090 kubelet[2059]: E0710 00:36:01.876037 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:02.428283 kubelet[2059]: E0710 00:36:02.428250 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:05.254641 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:33892.service. Jul 10 00:36:05.288300 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 33892 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:05.289418 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:05.293217 systemd-logind[1285]: New session 8 of user core. Jul 10 00:36:05.294136 systemd[1]: Started session-8.scope. Jul 10 00:36:05.410423 sshd[3466]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:05.412610 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:33892.service: Deactivated successfully. Jul 10 00:36:05.413373 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:36:05.414138 systemd-logind[1285]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:36:05.414971 systemd-logind[1285]: Removed session 8. Jul 10 00:36:10.413391 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:59022.service. 
Jul 10 00:36:10.448205 sshd[3483]: Accepted publickey for core from 10.0.0.1 port 59022 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:10.449384 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:10.452989 systemd-logind[1285]: New session 9 of user core. Jul 10 00:36:10.453882 systemd[1]: Started session-9.scope. Jul 10 00:36:10.660751 sshd[3483]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:10.662933 systemd[1]: sshd@8-10.0.0.23:22-10.0.0.1:59022.service: Deactivated successfully. Jul 10 00:36:10.664089 systemd-logind[1285]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:36:10.664143 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:36:10.665123 systemd-logind[1285]: Removed session 9. Jul 10 00:36:15.663662 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:59028.service. Jul 10 00:36:15.699326 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 59028 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:15.700373 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:15.704535 systemd-logind[1285]: New session 10 of user core. Jul 10 00:36:15.705596 systemd[1]: Started session-10.scope. Jul 10 00:36:15.827234 sshd[3518]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:15.829719 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:59040.service. Jul 10 00:36:15.831584 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:59028.service: Deactivated successfully. Jul 10 00:36:15.835110 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:36:15.835435 systemd-logind[1285]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:36:15.836089 systemd-logind[1285]: Removed session 10. 
Jul 10 00:36:15.864469 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 59040 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:15.865463 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:15.868687 systemd-logind[1285]: New session 11 of user core. Jul 10 00:36:15.869466 systemd[1]: Started session-11.scope. Jul 10 00:36:16.012163 sshd[3531]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:16.015124 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:59050.service. Jul 10 00:36:16.015523 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:59040.service: Deactivated successfully. Jul 10 00:36:16.017622 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:36:16.017788 systemd-logind[1285]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:36:16.018711 systemd-logind[1285]: Removed session 11. Jul 10 00:36:16.052627 sshd[3544]: Accepted publickey for core from 10.0.0.1 port 59050 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:16.053823 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:16.057731 systemd-logind[1285]: New session 12 of user core. Jul 10 00:36:16.058721 systemd[1]: Started session-12.scope. Jul 10 00:36:16.162385 sshd[3544]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:16.164541 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:59050.service: Deactivated successfully. Jul 10 00:36:16.165404 systemd-logind[1285]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:36:16.165434 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:36:16.166191 systemd-logind[1285]: Removed session 12. Jul 10 00:36:21.166259 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:54270.service. 
Jul 10 00:36:21.199404 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 54270 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:21.200421 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:21.203815 systemd-logind[1285]: New session 13 of user core. Jul 10 00:36:21.204627 systemd[1]: Started session-13.scope. Jul 10 00:36:21.377371 sshd[3559]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:21.379377 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:54270.service: Deactivated successfully. Jul 10 00:36:21.380345 systemd-logind[1285]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:36:21.380369 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:36:21.381150 systemd-logind[1285]: Removed session 13. Jul 10 00:36:26.381642 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:54276.service. Jul 10 00:36:26.419608 sshd[3574]: Accepted publickey for core from 10.0.0.1 port 54276 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:26.420667 sshd[3574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:26.424048 systemd-logind[1285]: New session 14 of user core. Jul 10 00:36:26.424993 systemd[1]: Started session-14.scope. Jul 10 00:36:26.530670 sshd[3574]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:26.532889 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:54276.service: Deactivated successfully. Jul 10 00:36:26.533853 systemd-logind[1285]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:36:26.533876 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:36:26.534570 systemd-logind[1285]: Removed session 14. Jul 10 00:36:31.534226 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:51664.service. 
Jul 10 00:36:31.571137 sshd[3590]: Accepted publickey for core from 10.0.0.1 port 51664 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:31.572908 sshd[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:31.576888 systemd-logind[1285]: New session 15 of user core. Jul 10 00:36:31.577637 systemd[1]: Started session-15.scope. Jul 10 00:36:31.692741 sshd[3590]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:31.696280 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:51672.service. Jul 10 00:36:31.696887 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:51664.service: Deactivated successfully. Jul 10 00:36:31.697933 systemd-logind[1285]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:36:31.697989 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:36:31.698917 systemd-logind[1285]: Removed session 15. Jul 10 00:36:31.731786 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 51672 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:31.733019 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:31.736696 systemd-logind[1285]: New session 16 of user core. Jul 10 00:36:31.737442 systemd[1]: Started session-16.scope. Jul 10 00:36:32.126434 sshd[3603]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:32.129268 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:51688.service. Jul 10 00:36:32.129975 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:51672.service: Deactivated successfully. Jul 10 00:36:32.131648 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:36:32.131999 systemd-logind[1285]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:36:32.132881 systemd-logind[1285]: Removed session 16. 
Jul 10 00:36:32.169250 sshd[3614]: Accepted publickey for core from 10.0.0.1 port 51688 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:32.170548 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:32.174676 systemd-logind[1285]: New session 17 of user core. Jul 10 00:36:32.175436 systemd[1]: Started session-17.scope. Jul 10 00:36:33.616602 systemd[1]: Started sshd@17-10.0.0.23:22-10.0.0.1:51690.service. Jul 10 00:36:33.618253 sshd[3614]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:33.620656 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:51688.service: Deactivated successfully. Jul 10 00:36:33.622079 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:36:33.622576 systemd-logind[1285]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:36:33.623828 systemd-logind[1285]: Removed session 17. Jul 10 00:36:33.658459 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 51690 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:33.659974 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:33.663490 systemd-logind[1285]: New session 18 of user core. Jul 10 00:36:33.664430 systemd[1]: Started session-18.scope. Jul 10 00:36:33.882953 sshd[3631]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:33.885276 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:51704.service. Jul 10 00:36:33.885667 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:51690.service: Deactivated successfully. Jul 10 00:36:33.887443 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:36:33.887813 systemd-logind[1285]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:36:33.888602 systemd-logind[1285]: Removed session 18. 
Jul 10 00:36:33.918149 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 51704 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:33.919413 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:33.922969 systemd-logind[1285]: New session 19 of user core. Jul 10 00:36:33.923813 systemd[1]: Started session-19.scope. Jul 10 00:36:34.033295 sshd[3646]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:34.036113 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:51704.service: Deactivated successfully. Jul 10 00:36:34.037347 systemd-logind[1285]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:36:34.037410 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:36:34.038471 systemd-logind[1285]: Removed session 19. Jul 10 00:36:39.036128 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:51710.service. Jul 10 00:36:39.071324 sshd[3664]: Accepted publickey for core from 10.0.0.1 port 51710 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:39.072526 sshd[3664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:39.076389 systemd-logind[1285]: New session 20 of user core. Jul 10 00:36:39.077478 systemd[1]: Started session-20.scope. Jul 10 00:36:39.191997 sshd[3664]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:39.194099 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:51710.service: Deactivated successfully. Jul 10 00:36:39.195105 systemd-logind[1285]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:36:39.195137 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:36:39.195830 systemd-logind[1285]: Removed session 20. 
Jul 10 00:36:41.160832 kubelet[2059]: E0710 00:36:41.160751 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:44.195215 systemd[1]: Started sshd@20-10.0.0.23:22-10.0.0.1:33702.service. Jul 10 00:36:44.227852 sshd[3682]: Accepted publickey for core from 10.0.0.1 port 33702 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:44.228824 sshd[3682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:44.232467 systemd-logind[1285]: New session 21 of user core. Jul 10 00:36:44.233437 systemd[1]: Started session-21.scope. Jul 10 00:36:44.339570 sshd[3682]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:44.341857 systemd[1]: sshd@20-10.0.0.23:22-10.0.0.1:33702.service: Deactivated successfully. Jul 10 00:36:44.342792 systemd-logind[1285]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:36:44.342812 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:36:44.343464 systemd-logind[1285]: Removed session 21. Jul 10 00:36:46.161253 kubelet[2059]: E0710 00:36:46.161204 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:49.342616 systemd[1]: Started sshd@21-10.0.0.23:22-10.0.0.1:33710.service. Jul 10 00:36:49.379345 sshd[3696]: Accepted publickey for core from 10.0.0.1 port 33710 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:49.380181 sshd[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:49.383767 systemd-logind[1285]: New session 22 of user core. Jul 10 00:36:49.384787 systemd[1]: Started session-22.scope. 
Jul 10 00:36:49.492090 sshd[3696]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:49.494448 systemd[1]: sshd@21-10.0.0.23:22-10.0.0.1:33710.service: Deactivated successfully. Jul 10 00:36:49.495493 systemd-logind[1285]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:36:49.495562 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:36:49.496295 systemd-logind[1285]: Removed session 22. Jul 10 00:36:50.160975 kubelet[2059]: E0710 00:36:50.160917 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:51.160572 kubelet[2059]: E0710 00:36:51.160529 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:36:54.495048 systemd[1]: Started sshd@22-10.0.0.23:22-10.0.0.1:46146.service. Jul 10 00:36:54.527284 sshd[3710]: Accepted publickey for core from 10.0.0.1 port 46146 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:54.528388 sshd[3710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:54.531634 systemd-logind[1285]: New session 23 of user core. Jul 10 00:36:54.532427 systemd[1]: Started session-23.scope. Jul 10 00:36:54.640358 sshd[3710]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:54.643376 systemd[1]: Started sshd@23-10.0.0.23:22-10.0.0.1:46148.service. Jul 10 00:36:54.643928 systemd[1]: sshd@22-10.0.0.23:22-10.0.0.1:46146.service: Deactivated successfully. Jul 10 00:36:54.645005 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:36:54.645526 systemd-logind[1285]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:36:54.646479 systemd-logind[1285]: Removed session 23. 
Jul 10 00:36:54.678722 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 46148 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw
Jul 10 00:36:54.680165 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 10 00:36:54.683930 systemd-logind[1285]: New session 24 of user core.
Jul 10 00:36:54.684860 systemd[1]: Started session-24.scope.
Jul 10 00:36:56.348330 env[1297]: time="2025-07-10T00:36:56.348265081Z" level=info msg="StopContainer for \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\" with timeout 30 (s)"
Jul 10 00:36:56.349039 env[1297]: time="2025-07-10T00:36:56.348995862Z" level=info msg="Stop container \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\" with signal terminated"
Jul 10 00:36:56.376952 env[1297]: time="2025-07-10T00:36:56.376881600Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 00:36:56.383802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914-rootfs.mount: Deactivated successfully.
Jul 10 00:36:56.386245 env[1297]: time="2025-07-10T00:36:56.386191777Z" level=info msg="StopContainer for \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\" with timeout 2 (s)"
Jul 10 00:36:56.386525 env[1297]: time="2025-07-10T00:36:56.386496323Z" level=info msg="Stop container \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\" with signal terminated"
Jul 10 00:36:56.389957 env[1297]: time="2025-07-10T00:36:56.389918365Z" level=info msg="shim disconnected" id=9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914
Jul 10 00:36:56.390031 env[1297]: time="2025-07-10T00:36:56.389963039Z" level=warning msg="cleaning up after shim disconnected" id=9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914 namespace=k8s.io
Jul 10 00:36:56.390031 env[1297]: time="2025-07-10T00:36:56.389973269Z" level=info msg="cleaning up dead shim"
Jul 10 00:36:56.393783 systemd-networkd[1081]: lxc_health: Link DOWN
Jul 10 00:36:56.393796 systemd-networkd[1081]: lxc_health: Lost carrier
Jul 10 00:36:56.397227 env[1297]: time="2025-07-10T00:36:56.397180108Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3779 runtime=io.containerd.runc.v2\n"
Jul 10 00:36:56.400055 env[1297]: time="2025-07-10T00:36:56.400017524Z" level=info msg="StopContainer for \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\" returns successfully"
Jul 10 00:36:56.400870 env[1297]: time="2025-07-10T00:36:56.400721566Z" level=info msg="StopPodSandbox for \"bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c\""
Jul 10 00:36:56.400951 env[1297]: time="2025-07-10T00:36:56.400871760Z" level=info msg="Container to stop \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:36:56.402946 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c-shm.mount: Deactivated successfully.
Jul 10 00:36:56.437262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c-rootfs.mount: Deactivated successfully.
Jul 10 00:36:56.444612 env[1297]: time="2025-07-10T00:36:56.444551798Z" level=info msg="shim disconnected" id=62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696
Jul 10 00:36:56.444612 env[1297]: time="2025-07-10T00:36:56.444595771Z" level=warning msg="cleaning up after shim disconnected" id=62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696 namespace=k8s.io
Jul 10 00:36:56.444612 env[1297]: time="2025-07-10T00:36:56.444603976Z" level=info msg="cleaning up dead shim"
Jul 10 00:36:56.444930 env[1297]: time="2025-07-10T00:36:56.444731488Z" level=info msg="shim disconnected" id=bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c
Jul 10 00:36:56.444930 env[1297]: time="2025-07-10T00:36:56.444752327Z" level=warning msg="cleaning up after shim disconnected" id=bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c namespace=k8s.io
Jul 10 00:36:56.444930 env[1297]: time="2025-07-10T00:36:56.444838470Z" level=info msg="cleaning up dead shim"
Jul 10 00:36:56.444629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696-rootfs.mount: Deactivated successfully.
Jul 10 00:36:56.451428 env[1297]: time="2025-07-10T00:36:56.451400160Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3828 runtime=io.containerd.runc.v2\n"
Jul 10 00:36:56.453905 env[1297]: time="2025-07-10T00:36:56.453870411Z" level=info msg="StopContainer for \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\" returns successfully"
Jul 10 00:36:56.454446 env[1297]: time="2025-07-10T00:36:56.454407928Z" level=info msg="StopPodSandbox for \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\""
Jul 10 00:36:56.454507 env[1297]: time="2025-07-10T00:36:56.454484392Z" level=info msg="Container to stop \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:36:56.454543 env[1297]: time="2025-07-10T00:36:56.454506304Z" level=info msg="Container to stop \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:36:56.454543 env[1297]: time="2025-07-10T00:36:56.454523786Z" level=info msg="Container to stop \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:36:56.454617 env[1297]: time="2025-07-10T00:36:56.454543144Z" level=info msg="Container to stop \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:36:56.454617 env[1297]: time="2025-07-10T00:36:56.454561658Z" level=info msg="Container to stop \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:36:56.454812 env[1297]: time="2025-07-10T00:36:56.454699809Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3829 runtime=io.containerd.runc.v2\n"
Jul 10 00:36:56.455105 env[1297]: time="2025-07-10T00:36:56.455063167Z" level=info msg="TearDown network for sandbox \"bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c\" successfully"
Jul 10 00:36:56.455105 env[1297]: time="2025-07-10T00:36:56.455100017Z" level=info msg="StopPodSandbox for \"bcdf9c691c58b1f618493c35f33484825aec1b54de032b0fbcc07751f39cae7c\" returns successfully"
Jul 10 00:36:56.479401 env[1297]: time="2025-07-10T00:36:56.479348555Z" level=info msg="shim disconnected" id=a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af
Jul 10 00:36:56.479401 env[1297]: time="2025-07-10T00:36:56.479397408Z" level=warning msg="cleaning up after shim disconnected" id=a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af namespace=k8s.io
Jul 10 00:36:56.479401 env[1297]: time="2025-07-10T00:36:56.479406585Z" level=info msg="cleaning up dead shim"
Jul 10 00:36:56.486636 env[1297]: time="2025-07-10T00:36:56.486576855Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:36:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3871 runtime=io.containerd.runc.v2\n"
Jul 10 00:36:56.486999 env[1297]: time="2025-07-10T00:36:56.486965991Z" level=info msg="TearDown network for sandbox \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" successfully"
Jul 10 00:36:56.486999 env[1297]: time="2025-07-10T00:36:56.486990999Z" level=info msg="StopPodSandbox for \"a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af\" returns successfully"
Jul 10 00:36:56.530848 kubelet[2059]: I0710 00:36:56.530812 2059 scope.go:117] "RemoveContainer" containerID="9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914"
Jul 10 00:36:56.532170 env[1297]: time="2025-07-10T00:36:56.532106732Z" level=info msg="RemoveContainer for \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\""
Jul 10 00:36:56.537532 env[1297]: time="2025-07-10T00:36:56.537488279Z" level=info msg="RemoveContainer for \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\" returns successfully"
Jul 10 00:36:56.537729 kubelet[2059]: I0710 00:36:56.537704 2059 scope.go:117] "RemoveContainer" containerID="9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914"
Jul 10 00:36:56.537981 env[1297]: time="2025-07-10T00:36:56.537909285Z" level=error msg="ContainerStatus for \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\": not found"
Jul 10 00:36:56.538119 kubelet[2059]: E0710 00:36:56.538076 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\": not found" containerID="9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914"
Jul 10 00:36:56.538199 kubelet[2059]: I0710 00:36:56.538127 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914"} err="failed to get container status \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d37880f3a27a33bffc2c02baa4a35a9591832c6c5d570478a3a32944fbc3914\": not found"
Jul 10 00:36:56.538236 kubelet[2059]: I0710 00:36:56.538201 2059 scope.go:117] "RemoveContainer" containerID="62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696"
Jul 10 00:36:56.540058 env[1297]: time="2025-07-10T00:36:56.540033072Z" level=info msg="RemoveContainer for \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\""
Jul 10 00:36:56.543837 env[1297]: time="2025-07-10T00:36:56.543807701Z" level=info msg="RemoveContainer for \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\" returns successfully"
Jul 10 00:36:56.543985 kubelet[2059]: I0710 00:36:56.543955 2059 scope.go:117] "RemoveContainer" containerID="3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e"
Jul 10 00:36:56.544856 env[1297]: time="2025-07-10T00:36:56.544830465Z" level=info msg="RemoveContainer for \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\""
Jul 10 00:36:56.548008 env[1297]: time="2025-07-10T00:36:56.547977717Z" level=info msg="RemoveContainer for \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\" returns successfully"
Jul 10 00:36:56.548185 kubelet[2059]: I0710 00:36:56.548158 2059 scope.go:117] "RemoveContainer" containerID="e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c"
Jul 10 00:36:56.550128 env[1297]: time="2025-07-10T00:36:56.550082056Z" level=info msg="RemoveContainer for \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\""
Jul 10 00:36:56.552944 env[1297]: time="2025-07-10T00:36:56.552899725Z" level=info msg="RemoveContainer for \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\" returns successfully"
Jul 10 00:36:56.553105 kubelet[2059]: I0710 00:36:56.553052 2059 scope.go:117] "RemoveContainer" containerID="f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942"
Jul 10 00:36:56.554266 env[1297]: time="2025-07-10T00:36:56.554229580Z" level=info msg="RemoveContainer for \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\""
Jul 10 00:36:56.557257 env[1297]: time="2025-07-10T00:36:56.557225366Z" level=info msg="RemoveContainer for \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\" returns successfully"
Jul 10 00:36:56.557425 kubelet[2059]: I0710 00:36:56.557395 2059 scope.go:117] "RemoveContainer" containerID="b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47"
Jul 10 00:36:56.558241 env[1297]: time="2025-07-10T00:36:56.558211210Z" level=info msg="RemoveContainer for \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\""
Jul 10 00:36:56.561229 env[1297]: time="2025-07-10T00:36:56.561206125Z" level=info msg="RemoveContainer for \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\" returns successfully"
Jul 10 00:36:56.561397 kubelet[2059]: I0710 00:36:56.561371 2059 scope.go:117] "RemoveContainer" containerID="62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696"
Jul 10 00:36:56.561639 env[1297]: time="2025-07-10T00:36:56.561580012Z" level=error msg="ContainerStatus for \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\": not found"
Jul 10 00:36:56.561787 kubelet[2059]: E0710 00:36:56.561740 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\": not found" containerID="62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696"
Jul 10 00:36:56.561855 kubelet[2059]: I0710 00:36:56.561797 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696"} err="failed to get container status \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\": rpc error: code = NotFound desc = an error occurred when try to find container \"62dc0fcfb62ed74db1f7444c4524a4c273d038d35f8e83074ccadc33a13b6696\": not found"
Jul 10 00:36:56.561855 kubelet[2059]: I0710 00:36:56.561826 2059 scope.go:117] "RemoveContainer" containerID="3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e"
Jul 10 00:36:56.562018 env[1297]: time="2025-07-10T00:36:56.561983324Z" level=error msg="ContainerStatus for \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\": not found"
Jul 10 00:36:56.562144 kubelet[2059]: E0710 00:36:56.562121 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\": not found" containerID="3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e"
Jul 10 00:36:56.562194 kubelet[2059]: I0710 00:36:56.562144 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e"} err="failed to get container status \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f59b99763fd87721febee8c0f7280d9117b52540ddee11237c784f6b93db91e\": not found"
Jul 10 00:36:56.562194 kubelet[2059]: I0710 00:36:56.562160 2059 scope.go:117] "RemoveContainer" containerID="e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c"
Jul 10 00:36:56.562332 env[1297]: time="2025-07-10T00:36:56.562292208Z" level=error msg="ContainerStatus for \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\": not found"
Jul 10 00:36:56.562473 kubelet[2059]: E0710 00:36:56.562404 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\": not found" containerID="e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c"
Jul 10 00:36:56.562473 kubelet[2059]: I0710 00:36:56.562429 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c"} err="failed to get container status \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e19eab86b281a655a828d38701dc5336422c7c0af1309dc48f5d0b5e4c17a36c\": not found"
Jul 10 00:36:56.562473 kubelet[2059]: I0710 00:36:56.562449 2059 scope.go:117] "RemoveContainer" containerID="f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942"
Jul 10 00:36:56.562674 env[1297]: time="2025-07-10T00:36:56.562600271Z" level=error msg="ContainerStatus for \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\": not found"
Jul 10 00:36:56.562743 kubelet[2059]: E0710 00:36:56.562724 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\": not found" containerID="f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942"
Jul 10 00:36:56.562790 kubelet[2059]: I0710 00:36:56.562753 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942"} err="failed to get container status \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\": rpc error: code = NotFound desc = an error occurred when try to find container \"f79b55fec1a61591ef3ff93e70d378ced4a95432e18a6ba4c0e09a8f2daaa942\": not found"
Jul 10 00:36:56.562790 kubelet[2059]: I0710 00:36:56.562783 2059 scope.go:117] "RemoveContainer" containerID="b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47"
Jul 10 00:36:56.562995 env[1297]: time="2025-07-10T00:36:56.562927009Z" level=error msg="ContainerStatus for \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\": not found"
Jul 10 00:36:56.563154 kubelet[2059]: E0710 00:36:56.563053 2059 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\": not found" containerID="b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47"
Jul 10 00:36:56.563154 kubelet[2059]: I0710 00:36:56.563077 2059 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47"} err="failed to get container status \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\": rpc error: code = NotFound desc = an error occurred when try to find container \"b977a79a19520e777ee0666063c3657c759c32969d585446a387713d078dfd47\": not found"
Jul 10 00:36:56.573327 kubelet[2059]: I0710 00:36:56.573283 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qzrz\" (UniqueName: \"kubernetes.io/projected/13d07cbe-f32d-46a1-b838-f23edab289eb-kube-api-access-9qzrz\") pod \"13d07cbe-f32d-46a1-b838-f23edab289eb\" (UID: \"13d07cbe-f32d-46a1-b838-f23edab289eb\") "
Jul 10 00:36:56.573327 kubelet[2059]: I0710 00:36:56.573321 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13d07cbe-f32d-46a1-b838-f23edab289eb-cilium-config-path\") pod \"13d07cbe-f32d-46a1-b838-f23edab289eb\" (UID: \"13d07cbe-f32d-46a1-b838-f23edab289eb\") "
Jul 10 00:36:56.575078 kubelet[2059]: I0710 00:36:56.575045 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13d07cbe-f32d-46a1-b838-f23edab289eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "13d07cbe-f32d-46a1-b838-f23edab289eb" (UID: "13d07cbe-f32d-46a1-b838-f23edab289eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 10 00:36:56.575854 kubelet[2059]: I0710 00:36:56.575832 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d07cbe-f32d-46a1-b838-f23edab289eb-kube-api-access-9qzrz" (OuterVolumeSpecName: "kube-api-access-9qzrz") pod "13d07cbe-f32d-46a1-b838-f23edab289eb" (UID: "13d07cbe-f32d-46a1-b838-f23edab289eb"). InnerVolumeSpecName "kube-api-access-9qzrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 10 00:36:56.675972 kubelet[2059]: I0710 00:36:56.673685 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-host-proc-sys-net\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.675972 kubelet[2059]: I0710 00:36:56.673731 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34fc034b-0d53-4785-9302-649b7383823f-cilium-config-path\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.675972 kubelet[2059]: I0710 00:36:56.673751 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-etc-cni-netd\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.675972 kubelet[2059]: I0710 00:36:56.673795 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-xtables-lock\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.675972 kubelet[2059]: I0710 00:36:56.673821 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cilium-run\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.675972 kubelet[2059]: I0710 00:36:56.673836 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx4qh\" (UniqueName: \"kubernetes.io/projected/34fc034b-0d53-4785-9302-649b7383823f-kube-api-access-tx4qh\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.676251 kubelet[2059]: I0710 00:36:56.673848 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34fc034b-0d53-4785-9302-649b7383823f-hubble-tls\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.676251 kubelet[2059]: I0710 00:36:56.673860 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-host-proc-sys-kernel\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.676251 kubelet[2059]: I0710 00:36:56.673872 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-lib-modules\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.676251 kubelet[2059]: I0710 00:36:56.673885 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cni-path\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.676251 kubelet[2059]: I0710 00:36:56.673870 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676251 kubelet[2059]: I0710 00:36:56.673895 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cilium-cgroup\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.676405 kubelet[2059]: I0710 00:36:56.673998 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34fc034b-0d53-4785-9302-649b7383823f-clustermesh-secrets\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.676405 kubelet[2059]: I0710 00:36:56.674023 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-hostproc\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.676405 kubelet[2059]: I0710 00:36:56.674053 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-bpf-maps\") pod \"34fc034b-0d53-4785-9302-649b7383823f\" (UID: \"34fc034b-0d53-4785-9302-649b7383823f\") "
Jul 10 00:36:56.676405 kubelet[2059]: I0710 00:36:56.674131 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13d07cbe-f32d-46a1-b838-f23edab289eb-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.676405 kubelet[2059]: I0710 00:36:56.674144 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.676405 kubelet[2059]: I0710 00:36:56.674155 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qzrz\" (UniqueName: \"kubernetes.io/projected/13d07cbe-f32d-46a1-b838-f23edab289eb-kube-api-access-9qzrz\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.676554 kubelet[2059]: I0710 00:36:56.673936 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676554 kubelet[2059]: I0710 00:36:56.673953 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676554 kubelet[2059]: I0710 00:36:56.673964 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676554 kubelet[2059]: I0710 00:36:56.673972 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676554 kubelet[2059]: I0710 00:36:56.674180 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676673 kubelet[2059]: I0710 00:36:56.675367 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-hostproc" (OuterVolumeSpecName: "hostproc") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676673 kubelet[2059]: I0710 00:36:56.675411 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676673 kubelet[2059]: I0710 00:36:56.675821 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34fc034b-0d53-4785-9302-649b7383823f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 10 00:36:56.676673 kubelet[2059]: I0710 00:36:56.675885 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676673 kubelet[2059]: I0710 00:36:56.675900 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cni-path" (OuterVolumeSpecName: "cni-path") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 10 00:36:56.676818 kubelet[2059]: I0710 00:36:56.676776 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fc034b-0d53-4785-9302-649b7383823f-kube-api-access-tx4qh" (OuterVolumeSpecName: "kube-api-access-tx4qh") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "kube-api-access-tx4qh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 10 00:36:56.678026 kubelet[2059]: I0710 00:36:56.677991 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fc034b-0d53-4785-9302-649b7383823f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 10 00:36:56.678026 kubelet[2059]: I0710 00:36:56.678009 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fc034b-0d53-4785-9302-649b7383823f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "34fc034b-0d53-4785-9302-649b7383823f" (UID: "34fc034b-0d53-4785-9302-649b7383823f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 10 00:36:56.774708 kubelet[2059]: I0710 00:36:56.774654 2059 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.774708 kubelet[2059]: I0710 00:36:56.774691 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.774708 kubelet[2059]: I0710 00:36:56.774704 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx4qh\" (UniqueName: \"kubernetes.io/projected/34fc034b-0d53-4785-9302-649b7383823f-kube-api-access-tx4qh\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.774708 kubelet[2059]: I0710 00:36:56.774714 2059 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34fc034b-0d53-4785-9302-649b7383823f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.774708 kubelet[2059]: I0710 00:36:56.774723 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.775054 kubelet[2059]: I0710 00:36:56.774731 2059 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.775054 kubelet[2059]: I0710 00:36:56.774742 2059 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.775054 kubelet[2059]: I0710 00:36:56.774749 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.775054 kubelet[2059]: I0710 00:36:56.774756 2059 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34fc034b-0d53-4785-9302-649b7383823f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.775054 kubelet[2059]: I0710 00:36:56.774781 2059 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.775054 kubelet[2059]: I0710 00:36:56.774788 2059 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.775054 kubelet[2059]: I0710 00:36:56.774795 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34fc034b-0d53-4785-9302-649b7383823f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:56.775054 kubelet[2059]: I0710 00:36:56.774803 2059 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34fc034b-0d53-4785-9302-649b7383823f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:57.357210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af-rootfs.mount: Deactivated successfully.
Jul 10 00:36:57.357366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3df855007cd3555939191a4e663b8b2423b9b66934c55d458daf6aca3c739af-shm.mount: Deactivated successfully.
Jul 10 00:36:57.357481 systemd[1]: var-lib-kubelet-pods-13d07cbe\x2df32d\x2d46a1\x2db838\x2df23edab289eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9qzrz.mount: Deactivated successfully. Jul 10 00:36:57.357629 systemd[1]: var-lib-kubelet-pods-34fc034b\x2d0d53\x2d4785\x2d9302\x2d649b7383823f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtx4qh.mount: Deactivated successfully. Jul 10 00:36:57.357751 systemd[1]: var-lib-kubelet-pods-34fc034b\x2d0d53\x2d4785\x2d9302\x2d649b7383823f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:36:57.357875 systemd[1]: var-lib-kubelet-pods-34fc034b\x2d0d53\x2d4785\x2d9302\x2d649b7383823f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:36:58.161723 kubelet[2059]: I0710 00:36:58.161677 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d07cbe-f32d-46a1-b838-f23edab289eb" path="/var/lib/kubelet/pods/13d07cbe-f32d-46a1-b838-f23edab289eb/volumes" Jul 10 00:36:58.162103 kubelet[2059]: I0710 00:36:58.162026 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34fc034b-0d53-4785-9302-649b7383823f" path="/var/lib/kubelet/pods/34fc034b-0d53-4785-9302-649b7383823f/volumes" Jul 10 00:36:58.175251 sshd[3723]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:58.178005 systemd[1]: Started sshd@24-10.0.0.23:22-10.0.0.1:46152.service. Jul 10 00:36:58.178582 systemd[1]: sshd@23-10.0.0.23:22-10.0.0.1:46148.service: Deactivated successfully. Jul 10 00:36:58.179531 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:36:58.179563 systemd-logind[1285]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:36:58.180494 systemd-logind[1285]: Removed session 24. 
Jul 10 00:36:58.213425 sshd[3888]: Accepted publickey for core from 10.0.0.1 port 46152 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:58.214418 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:58.217917 systemd-logind[1285]: New session 25 of user core. Jul 10 00:36:58.218872 systemd[1]: Started session-25.scope. Jul 10 00:36:58.907951 sshd[3888]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:58.911103 systemd[1]: Started sshd@25-10.0.0.23:22-10.0.0.1:46166.service. Jul 10 00:36:58.911958 systemd[1]: sshd@24-10.0.0.23:22-10.0.0.1:46152.service: Deactivated successfully. Jul 10 00:36:58.913234 systemd-logind[1285]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:36:58.913346 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:36:58.914345 systemd-logind[1285]: Removed session 25. Jul 10 00:36:58.929961 kubelet[2059]: E0710 00:36:58.929932 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34fc034b-0d53-4785-9302-649b7383823f" containerName="cilium-agent" Jul 10 00:36:58.930153 kubelet[2059]: E0710 00:36:58.930137 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13d07cbe-f32d-46a1-b838-f23edab289eb" containerName="cilium-operator" Jul 10 00:36:58.930235 kubelet[2059]: E0710 00:36:58.930218 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34fc034b-0d53-4785-9302-649b7383823f" containerName="mount-bpf-fs" Jul 10 00:36:58.930311 kubelet[2059]: E0710 00:36:58.930297 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34fc034b-0d53-4785-9302-649b7383823f" containerName="mount-cgroup" Jul 10 00:36:58.930383 kubelet[2059]: E0710 00:36:58.930369 2059 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34fc034b-0d53-4785-9302-649b7383823f" containerName="apply-sysctl-overwrites" Jul 10 00:36:58.930533 kubelet[2059]: E0710 00:36:58.930511 2059 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34fc034b-0d53-4785-9302-649b7383823f" containerName="clean-cilium-state" Jul 10 00:36:58.930654 kubelet[2059]: I0710 00:36:58.930639 2059 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d07cbe-f32d-46a1-b838-f23edab289eb" containerName="cilium-operator" Jul 10 00:36:58.930734 kubelet[2059]: I0710 00:36:58.930720 2059 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fc034b-0d53-4785-9302-649b7383823f" containerName="cilium-agent" Jul 10 00:36:58.959243 sshd[3900]: Accepted publickey for core from 10.0.0.1 port 46166 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:58.960546 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:58.965308 systemd-logind[1285]: New session 26 of user core. Jul 10 00:36:58.965647 systemd[1]: Started session-26.scope. Jul 10 00:36:59.086880 kubelet[2059]: I0710 00:36:59.086833 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-bpf-maps\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087068 kubelet[2059]: I0710 00:36:59.086899 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-cgroup\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087068 kubelet[2059]: I0710 00:36:59.086929 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-lib-modules\") pod \"cilium-lwnr6\" (UID: 
\"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087068 kubelet[2059]: I0710 00:36:59.086947 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-xtables-lock\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087068 kubelet[2059]: I0710 00:36:59.086968 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-hostproc\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087068 kubelet[2059]: I0710 00:36:59.087015 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55170955-2d0f-4a78-89e3-944b3c48879f-clustermesh-secrets\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087068 kubelet[2059]: I0710 00:36:59.087043 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-config-path\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087257 kubelet[2059]: I0710 00:36:59.087082 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-ipsec-secrets\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087257 kubelet[2059]: 
I0710 00:36:59.087096 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-host-proc-sys-kernel\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087257 kubelet[2059]: I0710 00:36:59.087113 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-etc-cni-netd\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087257 kubelet[2059]: I0710 00:36:59.087125 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg776\" (UniqueName: \"kubernetes.io/projected/55170955-2d0f-4a78-89e3-944b3c48879f-kube-api-access-kg776\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087257 kubelet[2059]: I0710 00:36:59.087146 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-run\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087257 kubelet[2059]: I0710 00:36:59.087158 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cni-path\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087397 kubelet[2059]: I0710 00:36:59.087180 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-host-proc-sys-net\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.087397 kubelet[2059]: I0710 00:36:59.087193 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55170955-2d0f-4a78-89e3-944b3c48879f-hubble-tls\") pod \"cilium-lwnr6\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " pod="kube-system/cilium-lwnr6" Jul 10 00:36:59.090686 sshd[3900]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:59.093527 systemd[1]: Started sshd@26-10.0.0.23:22-10.0.0.1:46180.service. Jul 10 00:36:59.095193 systemd-logind[1285]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:36:59.095876 systemd[1]: sshd@25-10.0.0.23:22-10.0.0.1:46166.service: Deactivated successfully. Jul 10 00:36:59.096491 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:36:59.097301 systemd-logind[1285]: Removed session 26. Jul 10 00:36:59.105358 kubelet[2059]: E0710 00:36:59.105275 2059 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-kg776 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-lwnr6" podUID="55170955-2d0f-4a78-89e3-944b3c48879f" Jul 10 00:36:59.127979 sshd[3915]: Accepted publickey for core from 10.0.0.1 port 46180 ssh2: RSA SHA256:suUhWV759MqU0C+Dl6JG8TPW8PqnqlsB4qushdi9Ejw Jul 10 00:36:59.129125 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:59.132816 systemd-logind[1285]: New session 27 of user core. 
Jul 10 00:36:59.133695 systemd[1]: Started session-27.scope. Jul 10 00:36:59.692353 kubelet[2059]: I0710 00:36:59.692295 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cni-path\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692353 kubelet[2059]: I0710 00:36:59.692352 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-etc-cni-netd\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692744 kubelet[2059]: I0710 00:36:59.692371 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-run\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692744 kubelet[2059]: I0710 00:36:59.692395 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg776\" (UniqueName: \"kubernetes.io/projected/55170955-2d0f-4a78-89e3-944b3c48879f-kube-api-access-kg776\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692744 kubelet[2059]: I0710 00:36:59.692415 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-host-proc-sys-kernel\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692744 kubelet[2059]: I0710 00:36:59.692432 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-cgroup\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692744 kubelet[2059]: I0710 00:36:59.692425 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cni-path" (OuterVolumeSpecName: "cni-path") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.692744 kubelet[2059]: I0710 00:36:59.692454 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-lib-modules\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692920 kubelet[2059]: I0710 00:36:59.692475 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-ipsec-secrets\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692920 kubelet[2059]: I0710 00:36:59.692493 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-host-proc-sys-net\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692920 kubelet[2059]: I0710 00:36:59.692483 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.692920 kubelet[2059]: I0710 00:36:59.692515 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-config-path\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692920 kubelet[2059]: I0710 00:36:59.692595 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-xtables-lock\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.692920 kubelet[2059]: I0710 00:36:59.692625 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55170955-2d0f-4a78-89e3-944b3c48879f-clustermesh-secrets\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.693061 kubelet[2059]: I0710 00:36:59.692652 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-bpf-maps\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.693061 kubelet[2059]: I0710 00:36:59.692670 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-hostproc\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.693061 kubelet[2059]: I0710 00:36:59.692692 2059 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/55170955-2d0f-4a78-89e3-944b3c48879f-hubble-tls\") pod \"55170955-2d0f-4a78-89e3-944b3c48879f\" (UID: \"55170955-2d0f-4a78-89e3-944b3c48879f\") " Jul 10 00:36:59.693061 kubelet[2059]: I0710 00:36:59.692793 2059 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.693061 kubelet[2059]: I0710 00:36:59.692807 2059 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:36:59.693351 kubelet[2059]: I0710 00:36:59.693323 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.693402 kubelet[2059]: I0710 00:36:59.693362 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.693402 kubelet[2059]: I0710 00:36:59.693385 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.693449 kubelet[2059]: I0710 00:36:59.693406 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-hostproc" (OuterVolumeSpecName: "hostproc") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.693449 kubelet[2059]: I0710 00:36:59.693425 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.693449 kubelet[2059]: I0710 00:36:59.693441 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.693513 kubelet[2059]: I0710 00:36:59.693460 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.693608 kubelet[2059]: I0710 00:36:59.693588 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:36:59.694553 kubelet[2059]: I0710 00:36:59.694526 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:36:59.696328 kubelet[2059]: I0710 00:36:59.695665 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55170955-2d0f-4a78-89e3-944b3c48879f-kube-api-access-kg776" (OuterVolumeSpecName: "kube-api-access-kg776") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "kube-api-access-kg776". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:36:59.696884 kubelet[2059]: I0710 00:36:59.696857 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55170955-2d0f-4a78-89e3-944b3c48879f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:36:59.697062 systemd[1]: var-lib-kubelet-pods-55170955\x2d2d0f\x2d4a78\x2d89e3\x2d944b3c48879f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkg776.mount: Deactivated successfully. Jul 10 00:36:59.697187 systemd[1]: var-lib-kubelet-pods-55170955\x2d2d0f\x2d4a78\x2d89e3\x2d944b3c48879f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:36:59.698208 kubelet[2059]: I0710 00:36:59.698177 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:36:59.698896 kubelet[2059]: I0710 00:36:59.698868 2059 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55170955-2d0f-4a78-89e3-944b3c48879f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "55170955-2d0f-4a78-89e3-944b3c48879f" (UID: "55170955-2d0f-4a78-89e3-944b3c48879f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:36:59.699388 systemd[1]: var-lib-kubelet-pods-55170955\x2d2d0f\x2d4a78\x2d89e3\x2d944b3c48879f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 10 00:36:59.699482 systemd[1]: var-lib-kubelet-pods-55170955\x2d2d0f\x2d4a78\x2d89e3\x2d944b3c48879f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 10 00:36:59.793630 kubelet[2059]: I0710 00:36:59.793576 2059 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55170955-2d0f-4a78-89e3-944b3c48879f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793630 kubelet[2059]: I0710 00:36:59.793610 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793630 kubelet[2059]: I0710 00:36:59.793618 2059 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg776\" (UniqueName: \"kubernetes.io/projected/55170955-2d0f-4a78-89e3-944b3c48879f-kube-api-access-kg776\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793630 kubelet[2059]: I0710 00:36:59.793630 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793630 kubelet[2059]: I0710 00:36:59.793638 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793630 kubelet[2059]: I0710 00:36:59.793645 2059 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793630 kubelet[2059]: I0710 00:36:59.793652 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793965 kubelet[2059]: I0710 00:36:59.793660 2059 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793965 kubelet[2059]: I0710 00:36:59.793667 2059 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55170955-2d0f-4a78-89e3-944b3c48879f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793965 kubelet[2059]: I0710 00:36:59.793673 2059 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55170955-2d0f-4a78-89e3-944b3c48879f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793965 kubelet[2059]: I0710 00:36:59.793682 2059 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793965 kubelet[2059]: I0710 00:36:59.793689 2059 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 10 00:36:59.793965 kubelet[2059]: I0710 00:36:59.793696 2059 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55170955-2d0f-4a78-89e3-944b3c48879f-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 10 00:37:00.218429 kubelet[2059]: E0710 00:37:00.218386 2059 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 00:37:00.698848 kubelet[2059]: I0710 00:37:00.698810 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-host-proc-sys-net\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.698848 kubelet[2059]: I0710 00:37:00.698852 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/944ef211-e442-4245-b983-503b67a8f5cf-hubble-tls\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699245 kubelet[2059]: I0710 00:37:00.698869 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-bpf-maps\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699245 kubelet[2059]: I0710 00:37:00.698880 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-lib-modules\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699245 kubelet[2059]: I0710 00:37:00.698892 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-host-proc-sys-kernel\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699245 kubelet[2059]: I0710 00:37:00.698904 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-cilium-cgroup\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699245 kubelet[2059]: I0710 00:37:00.698916 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/944ef211-e442-4245-b983-503b67a8f5cf-clustermesh-secrets\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699245 kubelet[2059]: I0710 00:37:00.698928 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-cni-path\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699378 kubelet[2059]: I0710 00:37:00.698942 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-etc-cni-netd\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699378 kubelet[2059]: I0710 00:37:00.698954 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/944ef211-e442-4245-b983-503b67a8f5cf-cilium-ipsec-secrets\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699378 kubelet[2059]: I0710 00:37:00.699019 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-xtables-lock\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699378 kubelet[2059]: I0710 00:37:00.699086 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/944ef211-e442-4245-b983-503b67a8f5cf-cilium-config-path\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699378 kubelet[2059]: I0710 00:37:00.699109 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-cilium-run\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699378 kubelet[2059]: I0710 00:37:00.699122 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/944ef211-e442-4245-b983-503b67a8f5cf-hostproc\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.699505 kubelet[2059]: I0710 00:37:00.699142 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8hdt\" (UniqueName: \"kubernetes.io/projected/944ef211-e442-4245-b983-503b67a8f5cf-kube-api-access-g8hdt\") pod \"cilium-xnlv4\" (UID: \"944ef211-e442-4245-b983-503b67a8f5cf\") " pod="kube-system/cilium-xnlv4"
Jul 10 00:37:00.903681 kubelet[2059]: E0710 00:37:00.903631 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:00.904263 env[1297]: time="2025-07-10T00:37:00.904201135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xnlv4,Uid:944ef211-e442-4245-b983-503b67a8f5cf,Namespace:kube-system,Attempt:0,}"
Jul 10 00:37:01.172256 env[1297]: time="2025-07-10T00:37:01.172197043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:37:01.172256 env[1297]: time="2025-07-10T00:37:01.172232290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:37:01.172256 env[1297]: time="2025-07-10T00:37:01.172242830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:01.172606 env[1297]: time="2025-07-10T00:37:01.172552795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9 pid=3947 runtime=io.containerd.runc.v2
Jul 10 00:37:01.200284 env[1297]: time="2025-07-10T00:37:01.200234699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xnlv4,Uid:944ef211-e442-4245-b983-503b67a8f5cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\""
Jul 10 00:37:01.201329 kubelet[2059]: E0710 00:37:01.201305 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:01.203981 env[1297]: time="2025-07-10T00:37:01.203948988Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 10 00:37:01.367094 env[1297]: time="2025-07-10T00:37:01.367008269Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a1800d5d15f95554d331939a93bf34696906ec2f59dea93f76f41f47b4fc0166\""
Jul 10 00:37:01.367573 env[1297]: time="2025-07-10T00:37:01.367543070Z" level=info msg="StartContainer for \"a1800d5d15f95554d331939a93bf34696906ec2f59dea93f76f41f47b4fc0166\""
Jul 10 00:37:01.408203 env[1297]: time="2025-07-10T00:37:01.408154017Z" level=info msg="StartContainer for \"a1800d5d15f95554d331939a93bf34696906ec2f59dea93f76f41f47b4fc0166\" returns successfully"
Jul 10 00:37:01.545534 kubelet[2059]: E0710 00:37:01.545407 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:01.552231 env[1297]: time="2025-07-10T00:37:01.552172636Z" level=info msg="shim disconnected" id=a1800d5d15f95554d331939a93bf34696906ec2f59dea93f76f41f47b4fc0166
Jul 10 00:37:01.552231 env[1297]: time="2025-07-10T00:37:01.552219815Z" level=warning msg="cleaning up after shim disconnected" id=a1800d5d15f95554d331939a93bf34696906ec2f59dea93f76f41f47b4fc0166 namespace=k8s.io
Jul 10 00:37:01.552231 env[1297]: time="2025-07-10T00:37:01.552230866Z" level=info msg="cleaning up dead shim"
Jul 10 00:37:01.557874 env[1297]: time="2025-07-10T00:37:01.557829727Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4030 runtime=io.containerd.runc.v2\n"
Jul 10 00:37:02.162128 kubelet[2059]: I0710 00:37:02.162093 2059 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55170955-2d0f-4a78-89e3-944b3c48879f" path="/var/lib/kubelet/pods/55170955-2d0f-4a78-89e3-944b3c48879f/volumes"
Jul 10 00:37:02.173935 kubelet[2059]: I0710 00:37:02.173867 2059 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:37:02Z","lastTransitionTime":"2025-07-10T00:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 10 00:37:02.548379 kubelet[2059]: E0710 00:37:02.548258 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:02.550222 env[1297]: time="2025-07-10T00:37:02.550182798Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 00:37:02.563646 env[1297]: time="2025-07-10T00:37:02.563597165Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dceeed6ec2570118f391ad638345e95ffb36dedf6f43889a916648a98aa11bff\""
Jul 10 00:37:02.564354 env[1297]: time="2025-07-10T00:37:02.564295064Z" level=info msg="StartContainer for \"dceeed6ec2570118f391ad638345e95ffb36dedf6f43889a916648a98aa11bff\""
Jul 10 00:37:02.599220 env[1297]: time="2025-07-10T00:37:02.599168850Z" level=info msg="StartContainer for \"dceeed6ec2570118f391ad638345e95ffb36dedf6f43889a916648a98aa11bff\" returns successfully"
Jul 10 00:37:02.621090 env[1297]: time="2025-07-10T00:37:02.621030279Z" level=info msg="shim disconnected" id=dceeed6ec2570118f391ad638345e95ffb36dedf6f43889a916648a98aa11bff
Jul 10 00:37:02.621090 env[1297]: time="2025-07-10T00:37:02.621077358Z" level=warning msg="cleaning up after shim disconnected" id=dceeed6ec2570118f391ad638345e95ffb36dedf6f43889a916648a98aa11bff namespace=k8s.io
Jul 10 00:37:02.621090 env[1297]: time="2025-07-10T00:37:02.621087046Z" level=info msg="cleaning up dead shim"
Jul 10 00:37:02.626774 env[1297]: time="2025-07-10T00:37:02.626720931Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4092 runtime=io.containerd.runc.v2\n"
Jul 10 00:37:02.804610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dceeed6ec2570118f391ad638345e95ffb36dedf6f43889a916648a98aa11bff-rootfs.mount: Deactivated successfully.
Jul 10 00:37:03.551890 kubelet[2059]: E0710 00:37:03.551862 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:03.553505 env[1297]: time="2025-07-10T00:37:03.553471918Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 00:37:03.568658 env[1297]: time="2025-07-10T00:37:03.568601572Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"391ce76cd1fcf07c0fedbca0a111b911b27423825aae1e79913db607d011cc3b\""
Jul 10 00:37:03.569962 env[1297]: time="2025-07-10T00:37:03.569918861Z" level=info msg="StartContainer for \"391ce76cd1fcf07c0fedbca0a111b911b27423825aae1e79913db607d011cc3b\""
Jul 10 00:37:03.615248 env[1297]: time="2025-07-10T00:37:03.615190524Z" level=info msg="StartContainer for \"391ce76cd1fcf07c0fedbca0a111b911b27423825aae1e79913db607d011cc3b\" returns successfully"
Jul 10 00:37:03.637275 env[1297]: time="2025-07-10T00:37:03.637222198Z" level=info msg="shim disconnected" id=391ce76cd1fcf07c0fedbca0a111b911b27423825aae1e79913db607d011cc3b
Jul 10 00:37:03.637432 env[1297]: time="2025-07-10T00:37:03.637283334Z" level=warning msg="cleaning up after shim disconnected" id=391ce76cd1fcf07c0fedbca0a111b911b27423825aae1e79913db607d011cc3b namespace=k8s.io
Jul 10 00:37:03.637432 env[1297]: time="2025-07-10T00:37:03.637293994Z" level=info msg="cleaning up dead shim"
Jul 10 00:37:03.643453 env[1297]: time="2025-07-10T00:37:03.643424957Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4149 runtime=io.containerd.runc.v2\n"
Jul 10 00:37:03.804597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-391ce76cd1fcf07c0fedbca0a111b911b27423825aae1e79913db607d011cc3b-rootfs.mount: Deactivated successfully.
Jul 10 00:37:04.554934 kubelet[2059]: E0710 00:37:04.554904 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:04.556504 env[1297]: time="2025-07-10T00:37:04.556435990Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:37:04.573016 env[1297]: time="2025-07-10T00:37:04.572949105Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"126474b1aa6739481e00c9f4f3484961e39197e13ac8515c57ca693bd992b5d6\""
Jul 10 00:37:04.573486 env[1297]: time="2025-07-10T00:37:04.573460421Z" level=info msg="StartContainer for \"126474b1aa6739481e00c9f4f3484961e39197e13ac8515c57ca693bd992b5d6\""
Jul 10 00:37:04.615260 env[1297]: time="2025-07-10T00:37:04.615196954Z" level=info msg="StartContainer for \"126474b1aa6739481e00c9f4f3484961e39197e13ac8515c57ca693bd992b5d6\" returns successfully"
Jul 10 00:37:04.770195 env[1297]: time="2025-07-10T00:37:04.770119405Z" level=info msg="shim disconnected" id=126474b1aa6739481e00c9f4f3484961e39197e13ac8515c57ca693bd992b5d6
Jul 10 00:37:04.770195 env[1297]: time="2025-07-10T00:37:04.770176192Z" level=warning msg="cleaning up after shim disconnected" id=126474b1aa6739481e00c9f4f3484961e39197e13ac8515c57ca693bd992b5d6 namespace=k8s.io
Jul 10 00:37:04.770195 env[1297]: time="2025-07-10T00:37:04.770188846Z" level=info msg="cleaning up dead shim"
Jul 10 00:37:04.778386 env[1297]: time="2025-07-10T00:37:04.778329244Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4203 runtime=io.containerd.runc.v2\n"
Jul 10 00:37:04.804812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-126474b1aa6739481e00c9f4f3484961e39197e13ac8515c57ca693bd992b5d6-rootfs.mount: Deactivated successfully.
Jul 10 00:37:05.219599 kubelet[2059]: E0710 00:37:05.219554 2059 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 00:37:05.559333 kubelet[2059]: E0710 00:37:05.558977 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:05.561027 env[1297]: time="2025-07-10T00:37:05.560943234Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:37:05.802372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount438541290.mount: Deactivated successfully.
Jul 10 00:37:06.099735 env[1297]: time="2025-07-10T00:37:06.099652602Z" level=info msg="CreateContainer within sandbox \"e5837724282edf0bf513e7ce1f18ac039a755ea36f4757a0a431a499b76ccdf9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"00b93412ca5fb61f89b9caeb8b6f1c59d4a440a7966916cde8b6209432722bde\""
Jul 10 00:37:06.100601 env[1297]: time="2025-07-10T00:37:06.100567800Z" level=info msg="StartContainer for \"00b93412ca5fb61f89b9caeb8b6f1c59d4a440a7966916cde8b6209432722bde\""
Jul 10 00:37:06.268552 env[1297]: time="2025-07-10T00:37:06.268499251Z" level=info msg="StartContainer for \"00b93412ca5fb61f89b9caeb8b6f1c59d4a440a7966916cde8b6209432722bde\" returns successfully"
Jul 10 00:37:06.468807 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 10 00:37:06.563645 kubelet[2059]: E0710 00:37:06.563362 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:07.564745 kubelet[2059]: E0710 00:37:07.564713 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:07.848538 systemd[1]: run-containerd-runc-k8s.io-00b93412ca5fb61f89b9caeb8b6f1c59d4a440a7966916cde8b6209432722bde-runc.NJefsD.mount: Deactivated successfully.
Jul 10 00:37:09.374200 systemd-networkd[1081]: lxc_health: Link UP
Jul 10 00:37:09.383996 systemd-networkd[1081]: lxc_health: Gained carrier
Jul 10 00:37:09.384973 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 10 00:37:10.579917 systemd-networkd[1081]: lxc_health: Gained IPv6LL
Jul 10 00:37:10.905717 kubelet[2059]: E0710 00:37:10.905541 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:10.920257 kubelet[2059]: I0710 00:37:10.919880 2059 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xnlv4" podStartSLOduration=10.919863894 podStartE2EDuration="10.919863894s" podCreationTimestamp="2025-07-10 00:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:37:07.020028684 +0000 UTC m=+96.941161074" watchObservedRunningTime="2025-07-10 00:37:10.919863894 +0000 UTC m=+100.840996254"
Jul 10 00:37:11.575147 kubelet[2059]: E0710 00:37:11.575110 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:12.077945 systemd[1]: run-containerd-runc-k8s.io-00b93412ca5fb61f89b9caeb8b6f1c59d4a440a7966916cde8b6209432722bde-runc.Dn6OkF.mount: Deactivated successfully.
Jul 10 00:37:12.117675 kubelet[2059]: E0710 00:37:12.117643 2059 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46372->127.0.0.1:45173: write tcp 127.0.0.1:46372->127.0.0.1:45173: write: broken pipe
Jul 10 00:37:12.161123 kubelet[2059]: E0710 00:37:12.161089 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:12.576745 kubelet[2059]: E0710 00:37:12.576712 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:13.161014 kubelet[2059]: E0710 00:37:13.160959 2059 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:14.157851 systemd[1]: run-containerd-runc-k8s.io-00b93412ca5fb61f89b9caeb8b6f1c59d4a440a7966916cde8b6209432722bde-runc.DIhera.mount: Deactivated successfully.
Jul 10 00:37:16.278017 sshd[3915]: pam_unix(sshd:session): session closed for user core
Jul 10 00:37:16.279888 systemd[1]: sshd@26-10.0.0.23:22-10.0.0.1:46180.service: Deactivated successfully.
Jul 10 00:37:16.280702 systemd[1]: session-27.scope: Deactivated successfully.
Jul 10 00:37:16.280709 systemd-logind[1285]: Session 27 logged out. Waiting for processes to exit.
Jul 10 00:37:16.281587 systemd-logind[1285]: Removed session 27.