May 10 00:40:07.439013 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025 May 10 00:40:07.439070 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:40:07.439081 kernel: BIOS-provided physical RAM map: May 10 00:40:07.439090 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 10 00:40:07.439095 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 10 00:40:07.439101 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 10 00:40:07.439108 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 10 00:40:07.439114 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 10 00:40:07.439121 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 10 00:40:07.439127 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 10 00:40:07.439148 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 10 00:40:07.439164 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 10 00:40:07.439170 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 10 00:40:07.439176 kernel: NX (Execute Disable) protection: active May 10 00:40:07.439197 kernel: SMBIOS 2.8 present. 
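The BIOS-e820 map above can be tallied mechanically to see how much RAM the firmware actually hands to the kernel. A minimal sketch (the helper name `e820_totals` is illustrative, not a kernel interface); region end addresses are inclusive, so each region spans `end - start + 1` bytes:

```python
import re

# Match "BIOS-e820: [mem 0xSTART-0xEND] TYPE" lines from a dmesg capture.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def e820_totals(dmesg_text):
    """Sum bytes per e820 region type (usable, reserved, ...)."""
    totals = {}
    for start, end, kind in E820_RE.findall(dmesg_text):
        size = int(end, 16) - int(start, 16) + 1  # inclusive end
        totals[kind] = totals.get(kind, 0) + size
    return totals

# The two usable regions reported above:
log = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""
print(e820_totals(log)["usable"])  # 2633481216 bytes (~2511 MiB)
```

That total (2,571,759 KiB) lines up with the "Memory: 2436696K/2571752K available" line later in the boot, minus a few pages the kernel reserves before printing the summary.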
May 10 00:40:07.439204 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 10 00:40:07.439209 kernel: Hypervisor detected: KVM May 10 00:40:07.439218 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 10 00:40:07.439226 kernel: kvm-clock: cpu 0, msr 81196001, primary cpu clock May 10 00:40:07.439234 kernel: kvm-clock: using sched offset of 3473494040 cycles May 10 00:40:07.439259 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 10 00:40:07.439275 kernel: tsc: Detected 2794.748 MHz processor May 10 00:40:07.439295 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 10 00:40:07.439313 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 10 00:40:07.439321 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 10 00:40:07.439340 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 10 00:40:07.439353 kernel: Using GB pages for direct mapping May 10 00:40:07.439362 kernel: ACPI: Early table checksum verification disabled May 10 00:40:07.439388 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 10 00:40:07.439395 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:40:07.439402 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:40:07.439408 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:40:07.439422 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 10 00:40:07.439442 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:40:07.439449 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:40:07.439456 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:40:07.439462 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 
10 00:40:07.439469 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 10 00:40:07.439475 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 10 00:40:07.439482 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 10 00:40:07.439510 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 10 00:40:07.439518 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 10 00:40:07.439524 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 10 00:40:07.439532 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 10 00:40:07.439541 kernel: No NUMA configuration found May 10 00:40:07.439555 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 10 00:40:07.439572 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 10 00:40:07.439579 kernel: Zone ranges: May 10 00:40:07.439586 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 10 00:40:07.439594 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 10 00:40:07.439601 kernel: Normal empty May 10 00:40:07.439607 kernel: Movable zone start for each node May 10 00:40:07.439623 kernel: Early memory node ranges May 10 00:40:07.439635 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 10 00:40:07.439642 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 10 00:40:07.439651 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 10 00:40:07.439661 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 10 00:40:07.439669 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 10 00:40:07.439676 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 10 00:40:07.439683 kernel: ACPI: PM-Timer IO Port: 0x608 May 10 00:40:07.439690 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 10 00:40:07.439697 kernel: IOAPIC[0]: apic_id 0, version 17, address 
0xfec00000, GSI 0-23 May 10 00:40:07.439704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 10 00:40:07.439710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 10 00:40:07.439717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 10 00:40:07.439728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 10 00:40:07.439749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 10 00:40:07.439757 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 10 00:40:07.439764 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 10 00:40:07.439771 kernel: TSC deadline timer available May 10 00:40:07.439789 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 10 00:40:07.439796 kernel: kvm-guest: KVM setup pv remote TLB flush May 10 00:40:07.439802 kernel: kvm-guest: setup PV sched yield May 10 00:40:07.439809 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 10 00:40:07.439820 kernel: Booting paravirtualized kernel on KVM May 10 00:40:07.439827 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 10 00:40:07.439834 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 10 00:40:07.439841 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 10 00:40:07.439848 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 10 00:40:07.439868 kernel: pcpu-alloc: [0] 0 1 2 3 May 10 00:40:07.439876 kernel: kvm-guest: setup async PF for cpu 0 May 10 00:40:07.439883 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 May 10 00:40:07.439890 kernel: kvm-guest: PV spinlocks enabled May 10 00:40:07.439899 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 10 00:40:07.439906 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 May 10 00:40:07.439913 kernel: Policy zone: DMA32 May 10 00:40:07.439921 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:40:07.439928 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 00:40:07.439935 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 10 00:40:07.439942 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 10 00:40:07.439949 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 00:40:07.439966 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 134796K reserved, 0K cma-reserved) May 10 00:40:07.439978 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 10 00:40:07.439985 kernel: ftrace: allocating 34584 entries in 136 pages May 10 00:40:07.439992 kernel: ftrace: allocated 136 pages with 2 groups May 10 00:40:07.439999 kernel: rcu: Hierarchical RCU implementation. May 10 00:40:07.440006 kernel: rcu: RCU event tracing is enabled. May 10 00:40:07.440014 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 10 00:40:07.440021 kernel: Rude variant of Tasks RCU enabled. May 10 00:40:07.440028 kernel: Tracing variant of Tasks RCU enabled. May 10 00:40:07.440037 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
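The "(order: N, ... bytes)" annotations on the Dentry and Inode cache lines follow the kernel's page-allocation convention: the order is log2 of the allocation size in pages. A small check (assuming the 4 KiB page size this x86-64 boot uses):

```python
def alloc_order(nbytes, page_size=4096):
    """Page-allocation order for a power-of-two allocation of nbytes,
    i.e. log2(nbytes / page_size)."""
    pages = nbytes // page_size
    assert pages & (pages - 1) == 0, "hash tables are power-of-two sized"
    return pages.bit_length() - 1

print(alloc_order(4194304))  # Dentry cache: 4 MiB -> order 10
print(alloc_order(2097152))  # Inode cache:  2 MiB -> order 9
```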
May 10 00:40:07.440044 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 10 00:40:07.440051 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 10 00:40:07.440057 kernel: random: crng init done May 10 00:40:07.440064 kernel: Console: colour VGA+ 80x25 May 10 00:40:07.440071 kernel: printk: console [ttyS0] enabled May 10 00:40:07.440091 kernel: ACPI: Core revision 20210730 May 10 00:40:07.440098 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 10 00:40:07.440105 kernel: APIC: Switch to symmetric I/O mode setup May 10 00:40:07.440113 kernel: x2apic enabled May 10 00:40:07.440120 kernel: Switched APIC routing to physical x2apic. May 10 00:40:07.440130 kernel: kvm-guest: setup PV IPIs May 10 00:40:07.440137 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 10 00:40:07.440144 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 10 00:40:07.440153 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 10 00:40:07.440173 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 10 00:40:07.440180 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 10 00:40:07.440187 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 10 00:40:07.440202 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 10 00:40:07.440209 kernel: Spectre V2 : Mitigation: Retpolines May 10 00:40:07.440216 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 10 00:40:07.440225 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 10 00:40:07.440232 kernel: RETBleed: Mitigation: untrained return thunk May 10 00:40:07.440239 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 10 00:40:07.440246 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 10 00:40:07.440254 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 10 00:40:07.440274 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 10 00:40:07.440283 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 10 00:40:07.440290 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 10 00:40:07.440298 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 10 00:40:07.440305 kernel: Freeing SMP alternatives memory: 32K May 10 00:40:07.440312 kernel: pid_max: default: 32768 minimum: 301 May 10 00:40:07.440319 kernel: LSM: Security Framework initializing May 10 00:40:07.440326 kernel: SELinux: Initializing. 
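The "5589.49 BogoMIPS (lpj=2794748)" figure above is derived from `loops_per_jiffy` with integer arithmetic: whole part `lpj / (500000/HZ)`, two-digit fraction `(lpj / (5000/HZ)) % 100`. HZ=1000 is an inference here (it is the value that reproduces the printed number), not something the log states directly:

```python
def bogomips_string(lpj, hz=1000):
    # Mirrors the kernel's integer formatting of BogoMIPS from
    # loops_per_jiffy; hz=1000 is assumed from the log output.
    whole = lpj // (500000 // hz)
    frac = (lpj // (5000 // hz)) % 100
    return f"{whole}.{frac:02d}"

print(bogomips_string(2794748))  # 5589.49, as reported above
```

Four CPUs at this rate also explain the "22357.98 BogoMIPS" total printed once all secondaries are up.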
May 10 00:40:07.440348 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 00:40:07.440355 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 00:40:07.440363 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 10 00:40:07.440383 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 10 00:40:07.440392 kernel: ... version: 0 May 10 00:40:07.440407 kernel: ... bit width: 48 May 10 00:40:07.440419 kernel: ... generic registers: 6 May 10 00:40:07.440427 kernel: ... value mask: 0000ffffffffffff May 10 00:40:07.440434 kernel: ... max period: 00007fffffffffff May 10 00:40:07.440443 kernel: ... fixed-purpose events: 0 May 10 00:40:07.440451 kernel: ... event mask: 000000000000003f May 10 00:40:07.440458 kernel: signal: max sigframe size: 1776 May 10 00:40:07.440473 kernel: rcu: Hierarchical SRCU implementation. May 10 00:40:07.440485 kernel: smp: Bringing up secondary CPUs ... May 10 00:40:07.440492 kernel: x86: Booting SMP configuration: May 10 00:40:07.440499 kernel: .... 
node #0, CPUs: #1 May 10 00:40:07.440506 kernel: kvm-clock: cpu 1, msr 81196041, secondary cpu clock May 10 00:40:07.440514 kernel: kvm-guest: setup async PF for cpu 1 May 10 00:40:07.440521 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 May 10 00:40:07.440530 kernel: #2 May 10 00:40:07.440550 kernel: kvm-clock: cpu 2, msr 81196081, secondary cpu clock May 10 00:40:07.440557 kernel: kvm-guest: setup async PF for cpu 2 May 10 00:40:07.440564 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 May 10 00:40:07.440571 kernel: #3 May 10 00:40:07.440581 kernel: kvm-clock: cpu 3, msr 811960c1, secondary cpu clock May 10 00:40:07.440588 kernel: kvm-guest: setup async PF for cpu 3 May 10 00:40:07.440607 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 May 10 00:40:07.440615 kernel: smp: Brought up 1 node, 4 CPUs May 10 00:40:07.440624 kernel: smpboot: Max logical packages: 1 May 10 00:40:07.440631 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 10 00:40:07.440639 kernel: devtmpfs: initialized May 10 00:40:07.440646 kernel: x86/mm: Memory block size: 128MB May 10 00:40:07.440666 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 00:40:07.440673 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 10 00:40:07.440680 kernel: pinctrl core: initialized pinctrl subsystem May 10 00:40:07.440688 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 00:40:07.440695 kernel: audit: initializing netlink subsys (disabled) May 10 00:40:07.440716 kernel: audit: type=2000 audit(1746837606.027:1): state=initialized audit_enabled=0 res=1 May 10 00:40:07.440725 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 00:40:07.440732 kernel: thermal_sys: Registered thermal governor 'user_space' May 10 00:40:07.440739 kernel: cpuidle: using governor menu May 10 00:40:07.440747 kernel: ACPI: bus type PCI registered May 10 00:40:07.440766 kernel: acpiphp: ACPI Hot 
Plug PCI Controller Driver version: 0.5 May 10 00:40:07.440783 kernel: dca service started, version 1.12.1 May 10 00:40:07.440793 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 10 00:40:07.440801 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 10 00:40:07.440811 kernel: PCI: Using configuration type 1 for base access May 10 00:40:07.440818 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 10 00:40:07.440825 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 10 00:40:07.440832 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 10 00:40:07.440852 kernel: ACPI: Added _OSI(Module Device) May 10 00:40:07.440859 kernel: ACPI: Added _OSI(Processor Device) May 10 00:40:07.440866 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 00:40:07.440873 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 00:40:07.440880 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 10 00:40:07.440890 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 10 00:40:07.440897 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 10 00:40:07.440917 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 10 00:40:07.440924 kernel: ACPI: Interpreter enabled May 10 00:40:07.440931 kernel: ACPI: PM: (supports S0 S3 S5) May 10 00:40:07.440938 kernel: ACPI: Using IOAPIC for interrupt routing May 10 00:40:07.440946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 10 00:40:07.440953 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 10 00:40:07.440967 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 00:40:07.441274 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 10 00:40:07.441364 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 10 00:40:07.441456 kernel: acpi 
PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 10 00:40:07.441465 kernel: PCI host bridge to bus 0000:00 May 10 00:40:07.441596 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 10 00:40:07.441680 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 10 00:40:07.441754 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 10 00:40:07.441837 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 10 00:40:07.441930 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 10 00:40:07.442008 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 10 00:40:07.442123 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 00:40:07.442232 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 10 00:40:07.442343 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 10 00:40:07.442439 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 10 00:40:07.442518 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 10 00:40:07.442593 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 10 00:40:07.442669 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 10 00:40:07.442769 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 10 00:40:07.442856 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 10 00:40:07.442934 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 10 00:40:07.443015 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 10 00:40:07.443112 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 10 00:40:07.443190 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 10 00:40:07.443266 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 10 00:40:07.443342 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] May 10 00:40:07.443454 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 10 00:40:07.443536 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 10 00:40:07.443608 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 10 00:40:07.443684 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 10 00:40:07.443760 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 10 00:40:07.443866 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 10 00:40:07.443943 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 10 00:40:07.444038 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 10 00:40:07.444122 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 10 00:40:07.444217 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 10 00:40:07.444332 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 10 00:40:07.444454 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 10 00:40:07.444464 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 10 00:40:07.444472 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 10 00:40:07.444479 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 10 00:40:07.444490 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 10 00:40:07.444509 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 10 00:40:07.444516 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 10 00:40:07.444524 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 10 00:40:07.444531 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 10 00:40:07.444538 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 10 00:40:07.444545 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 10 00:40:07.444552 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 
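The PCI enumeration above shows three vendor 0x1af4 (Red Hat/virtio) functions. Mapping their transitional device IDs to device types (the ID table below follows the virtio spec's transitional IDs; the helper name is illustrative) identifies them as RNG, block, and network devices:

```python
import re

PCI_RE = re.compile(r"pci (\S+): \[([0-9a-f]{4}):([0-9a-f]{4})\] type 00 class 0x(\w{6})")

# Transitional virtio PCI device IDs (vendor 0x1af4), per the virtio spec.
VIRTIO_IDS = {0x1000: "virtio-net", 0x1001: "virtio-blk", 0x1005: "virtio-rng"}

def virtio_devices(dmesg_text):
    """Map PCI BDF -> virtio device name for vendor-0x1af4 functions."""
    found = {}
    for bdf, vendor, device, _cls in PCI_RE.findall(dmesg_text):
        if int(vendor, 16) == 0x1af4:
            found[bdf] = VIRTIO_IDS.get(int(device, 16), "virtio (unknown)")
    return found

log = """
pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
"""
print(virtio_devices(log))
```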
May 10 00:40:07.444559 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 10 00:40:07.444566 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 10 00:40:07.444579 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 10 00:40:07.444586 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 10 00:40:07.444593 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 10 00:40:07.444600 kernel: iommu: Default domain type: Translated May 10 00:40:07.444607 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 10 00:40:07.444705 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 10 00:40:07.444791 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 10 00:40:07.444869 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 10 00:40:07.444884 kernel: vgaarb: loaded May 10 00:40:07.444892 kernel: pps_core: LinuxPPS API ver. 1 registered May 10 00:40:07.444899 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 10 00:40:07.444906 kernel: PTP clock support registered May 10 00:40:07.444914 kernel: PCI: Using ACPI for IRQ routing May 10 00:40:07.444921 kernel: PCI: pci_cache_line_size set to 64 bytes May 10 00:40:07.444928 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 10 00:40:07.444935 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 10 00:40:07.444942 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 10 00:40:07.444953 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 10 00:40:07.444960 kernel: clocksource: Switched to clocksource kvm-clock May 10 00:40:07.444967 kernel: VFS: Disk quotas dquot_6.6.0 May 10 00:40:07.444975 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 00:40:07.451457 kernel: pnp: PnP ACPI init May 10 00:40:07.451603 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 10 00:40:07.451615 kernel: pnp: PnP ACPI: found 6 devices May 10 00:40:07.451623 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 10 00:40:07.451636 kernel: NET: Registered PF_INET protocol family May 10 00:40:07.451644 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 10 00:40:07.451651 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 10 00:40:07.451659 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 00:40:07.451666 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 10 00:40:07.451673 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 10 00:40:07.451680 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 10 00:40:07.451687 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 00:40:07.451694 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 
00:40:07.451703 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 00:40:07.451710 kernel: NET: Registered PF_XDP protocol family May 10 00:40:07.451790 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 10 00:40:07.451856 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 10 00:40:07.451919 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 10 00:40:07.451982 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 10 00:40:07.452046 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 10 00:40:07.452111 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 10 00:40:07.452127 kernel: PCI: CLS 0 bytes, default 64 May 10 00:40:07.452134 kernel: Initialise system trusted keyrings May 10 00:40:07.452141 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 10 00:40:07.452149 kernel: Key type asymmetric registered May 10 00:40:07.452157 kernel: Asymmetric key parser 'x509' registered May 10 00:40:07.452166 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 10 00:40:07.452175 kernel: io scheduler mq-deadline registered May 10 00:40:07.452184 kernel: io scheduler kyber registered May 10 00:40:07.452193 kernel: io scheduler bfq registered May 10 00:40:07.452202 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 00:40:07.452215 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 10 00:40:07.452223 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 10 00:40:07.452231 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 10 00:40:07.452241 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 00:40:07.452250 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 00:40:07.452259 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 00:40:07.452268 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 00:40:07.452276 
kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 00:40:07.455633 kernel: rtc_cmos 00:04: RTC can wake from S4 May 10 00:40:07.455658 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 00:40:07.455740 kernel: rtc_cmos 00:04: registered as rtc0 May 10 00:40:07.455823 kernel: rtc_cmos 00:04: setting system clock to 2025-05-10T00:40:06 UTC (1746837606) May 10 00:40:07.455891 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 10 00:40:07.455901 kernel: NET: Registered PF_INET6 protocol family May 10 00:40:07.455908 kernel: Segment Routing with IPv6 May 10 00:40:07.455916 kernel: In-situ OAM (IOAM) with IPv6 May 10 00:40:07.455923 kernel: NET: Registered PF_PACKET protocol family May 10 00:40:07.455936 kernel: Key type dns_resolver registered May 10 00:40:07.455944 kernel: IPI shorthand broadcast: enabled May 10 00:40:07.455951 kernel: sched_clock: Marking stable (458004265, 113612437)->(640567402, -68950700) May 10 00:40:07.455958 kernel: registered taskstats version 1 May 10 00:40:07.455965 kernel: Loading compiled-in X.509 certificates May 10 00:40:07.455973 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117' May 10 00:40:07.455980 kernel: Key type .fscrypt registered May 10 00:40:07.455987 kernel: Key type fscrypt-provisioning registered May 10 00:40:07.455994 kernel: ima: No TPM chip found, activating TPM-bypass! 
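The rtc_cmos line above prints the same instant twice: as an ISO-style UTC timestamp and, in parentheses, as a Unix epoch value. The two can be cross-checked directly:

```python
from datetime import datetime, timezone

# "setting system clock to 2025-05-10T00:40:06 UTC (1746837606)"
stamp = datetime.fromtimestamp(1746837606, tz=timezone.utc)
print(stamp.isoformat())  # 2025-05-10T00:40:06+00:00
```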
May 10 00:40:07.456005 kernel: ima: Allocated hash algorithm: sha1 May 10 00:40:07.456012 kernel: ima: No architecture policies found May 10 00:40:07.456019 kernel: clk: Disabling unused clocks May 10 00:40:07.456031 kernel: Freeing unused kernel image (initmem) memory: 47456K May 10 00:40:07.456043 kernel: Write protecting the kernel read-only data: 28672k May 10 00:40:07.456057 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 10 00:40:07.456072 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 10 00:40:07.456090 kernel: Run /init as init process May 10 00:40:07.456112 kernel: with arguments: May 10 00:40:07.456126 kernel: /init May 10 00:40:07.456133 kernel: with environment: May 10 00:40:07.456140 kernel: HOME=/ May 10 00:40:07.456147 kernel: TERM=linux May 10 00:40:07.456154 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 00:40:07.456164 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:40:07.456174 systemd[1]: Detected virtualization kvm. May 10 00:40:07.456187 systemd[1]: Detected architecture x86-64. May 10 00:40:07.456195 systemd[1]: Running in initrd. May 10 00:40:07.456202 systemd[1]: No hostname configured, using default hostname. May 10 00:40:07.456210 systemd[1]: Hostname set to . May 10 00:40:07.456217 systemd[1]: Initializing machine ID from VM UUID. May 10 00:40:07.456225 systemd[1]: Queued start job for default target initrd.target. May 10 00:40:07.456233 systemd[1]: Started systemd-ask-password-console.path. May 10 00:40:07.456240 systemd[1]: Reached target cryptsetup.target. May 10 00:40:07.456249 systemd[1]: Reached target paths.target. May 10 00:40:07.456257 systemd[1]: Reached target slices.target. 
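The systemd 252 banner above encodes compile-time features as a `+FLAG`/`-FLAG` list. A sketch of splitting that banner into enabled and disabled sets (ignoring `key=value` entries such as `default-hierarchy=unified`):

```python
def systemd_features(flag_string):
    """Split a systemd version banner's feature list into
    enabled (+X) and disabled (-X) sets."""
    enabled, disabled = set(), set()
    for tok in flag_string.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return enabled, disabled

flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP "
         "-TPM2 +ZSTD default-hierarchy=unified")
on, off = systemd_features(flags)
print("SELINUX" in on, "APPARMOR" in off)  # True True
```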
May 10 00:40:07.456283 systemd[1]: Reached target swap.target. May 10 00:40:07.456292 systemd[1]: Reached target timers.target. May 10 00:40:07.456300 systemd[1]: Listening on iscsid.socket. May 10 00:40:07.456308 systemd[1]: Listening on iscsiuio.socket. May 10 00:40:07.456317 systemd[1]: Listening on systemd-journald-audit.socket. May 10 00:40:07.456325 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 00:40:07.456333 systemd[1]: Listening on systemd-journald.socket. May 10 00:40:07.456345 systemd[1]: Listening on systemd-networkd.socket. May 10 00:40:07.456353 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:40:07.456361 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:40:07.456539 systemd[1]: Reached target sockets.target. May 10 00:40:07.456548 systemd[1]: Starting kmod-static-nodes.service... May 10 00:40:07.456556 systemd[1]: Finished network-cleanup.service. May 10 00:40:07.456568 systemd[1]: Starting systemd-fsck-usr.service... May 10 00:40:07.456576 systemd[1]: Starting systemd-journald.service... May 10 00:40:07.456583 systemd[1]: Starting systemd-modules-load.service... May 10 00:40:07.456591 systemd[1]: Starting systemd-resolved.service... May 10 00:40:07.456599 systemd[1]: Starting systemd-vconsole-setup.service... May 10 00:40:07.456607 systemd[1]: Finished kmod-static-nodes.service. May 10 00:40:07.456615 systemd[1]: Finished systemd-fsck-usr.service. May 10 00:40:07.456622 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 00:40:07.456630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 00:40:07.456639 systemd[1]: Finished systemd-vconsole-setup.service. May 10 00:40:07.456647 systemd[1]: Starting dracut-cmdline-ask.service... May 10 00:40:07.456655 kernel: audit: type=1130 audit(1746837607.117:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:40:07.456664 kernel: audit: type=1130 audit(1746837607.121:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.456671 systemd[1]: Finished dracut-cmdline-ask.service. May 10 00:40:07.456679 kernel: audit: type=1130 audit(1746837607.151:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.456687 systemd[1]: Starting dracut-cmdline.service... May 10 00:40:07.456696 systemd[1]: Started systemd-resolved.service. May 10 00:40:07.456704 systemd[1]: Reached target nss-lookup.target. May 10 00:40:07.456720 systemd-journald[199]: Journal started May 10 00:40:07.456784 systemd-journald[199]: Runtime Journal (/run/log/journal/28a8e9cfda3d46d28994fbf2691b43ce) is 6.0M, max 48.5M, 42.5M free. May 10 00:40:07.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.439345 systemd-modules-load[200]: Inserted module 'overlay' May 10 00:40:07.462347 kernel: audit: type=1130 audit(1746837607.455:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:40:07.462384 kernel: SCSI subsystem initialized May 10 00:40:07.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.449621 systemd-resolved[201]: Positive Trust Anchors: May 10 00:40:07.464881 systemd[1]: Started systemd-journald.service. May 10 00:40:07.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.449631 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:40:07.469463 kernel: audit: type=1130 audit(1746837607.464:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.449659 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:40:07.476925 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 00:40:07.453357 systemd-resolved[201]: Defaulting to hostname 'linux'. May 10 00:40:07.478715 kernel: Loading iSCSI transport class v2.0-870. 
May 10 00:40:07.478929 dracut-cmdline[214]: dracut-dracut-053 May 10 00:40:07.478929 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 10 00:40:07.478929 dracut-cmdline[214]: BEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:40:07.485734 kernel: Bridge firewalling registered May 10 00:40:07.485641 systemd-modules-load[200]: Inserted module 'br_netfilter' May 10 00:40:07.498393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 00:40:07.498417 kernel: device-mapper: uevent: version 1.0.3 May 10 00:40:07.498432 kernel: iscsi: registered transport (tcp) May 10 00:40:07.498442 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 10 00:40:07.503018 systemd-modules-load[200]: Inserted module 'dm_multipath' May 10 00:40:07.503834 systemd[1]: Finished systemd-modules-load.service. May 10 00:40:07.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.505807 systemd[1]: Starting systemd-sysctl.service... May 10 00:40:07.510479 kernel: audit: type=1130 audit(1746837607.504:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.514872 systemd[1]: Finished systemd-sysctl.service. 
May 10 00:40:07.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.519398 kernel: audit: type=1130 audit(1746837607.514:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.524404 kernel: iscsi: registered transport (qla4xxx) May 10 00:40:07.524468 kernel: QLogic iSCSI HBA Driver May 10 00:40:07.561762 systemd[1]: Finished dracut-cmdline.service. May 10 00:40:07.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.564449 systemd[1]: Starting dracut-pre-udev.service... May 10 00:40:07.567939 kernel: audit: type=1130 audit(1746837607.563:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:40:07.619446 kernel: raid6: avx2x4 gen() 28309 MB/s May 10 00:40:07.636425 kernel: raid6: avx2x4 xor() 6252 MB/s May 10 00:40:07.653426 kernel: raid6: avx2x2 gen() 26005 MB/s May 10 00:40:07.670414 kernel: raid6: avx2x2 xor() 16396 MB/s May 10 00:40:07.687405 kernel: raid6: avx2x1 gen() 19979 MB/s May 10 00:40:07.704414 kernel: raid6: avx2x1 xor() 12626 MB/s May 10 00:40:07.721404 kernel: raid6: sse2x4 gen() 12698 MB/s May 10 00:40:07.738420 kernel: raid6: sse2x4 xor() 5328 MB/s May 10 00:40:07.755417 kernel: raid6: sse2x2 gen() 12058 MB/s May 10 00:40:07.792401 kernel: raid6: sse2x2 xor() 7038 MB/s May 10 00:40:07.809400 kernel: raid6: sse2x1 gen() 11793 MB/s May 10 00:40:07.826836 kernel: raid6: sse2x1 xor() 7746 MB/s May 10 00:40:07.826877 kernel: raid6: using algorithm avx2x4 gen() 28309 MB/s May 10 00:40:07.826891 kernel: raid6: .... xor() 6252 MB/s, rmw enabled May 10 00:40:07.827552 kernel: raid6: using avx2x2 recovery algorithm May 10 00:40:07.840404 kernel: xor: automatically using best checksumming function avx May 10 00:40:07.942422 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 10 00:40:07.952242 systemd[1]: Finished dracut-pre-udev.service. May 10 00:40:07.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.957000 audit: BPF prog-id=7 op=LOAD May 10 00:40:07.957000 audit: BPF prog-id=8 op=LOAD May 10 00:40:07.958398 kernel: audit: type=1130 audit(1746837607.954:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.958496 systemd[1]: Starting systemd-udevd.service... May 10 00:40:07.970609 systemd-udevd[400]: Using default interface naming scheme 'v252'. 
May 10 00:40:07.974591 systemd[1]: Started systemd-udevd.service. May 10 00:40:07.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:07.977581 systemd[1]: Starting dracut-pre-trigger.service... May 10 00:40:07.989291 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation May 10 00:40:08.017520 systemd[1]: Finished dracut-pre-trigger.service. May 10 00:40:08.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:08.020308 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:40:08.063556 systemd[1]: Finished systemd-udev-trigger.service. May 10 00:40:08.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:08.101937 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 10 00:40:08.110770 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 00:40:08.110795 kernel: GPT:9289727 != 19775487 May 10 00:40:08.110812 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 00:40:08.110835 kernel: cryptd: max_cpu_qlen set to 1000 May 10 00:40:08.110849 kernel: GPT:9289727 != 19775487 May 10 00:40:08.110861 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 00:40:08.110874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:40:08.113388 kernel: libata version 3.00 loaded. May 10 00:40:08.125166 kernel: AVX2 version of gcm_enc/dec engaged. 
May 10 00:40:08.125220 kernel: AES CTR mode by8 optimization enabled May 10 00:40:08.125230 kernel: ahci 0000:00:1f.2: version 3.0 May 10 00:40:08.152991 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 10 00:40:08.153015 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 10 00:40:08.153113 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 10 00:40:08.153191 kernel: scsi host0: ahci May 10 00:40:08.153286 kernel: scsi host1: ahci May 10 00:40:08.153396 kernel: scsi host2: ahci May 10 00:40:08.153486 kernel: scsi host3: ahci May 10 00:40:08.153580 kernel: scsi host4: ahci May 10 00:40:08.153687 kernel: scsi host5: ahci May 10 00:40:08.153793 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 10 00:40:08.153803 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 10 00:40:08.153812 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 10 00:40:08.153823 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 10 00:40:08.153834 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 10 00:40:08.153846 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 10 00:40:08.147882 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 10 00:40:08.204678 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) May 10 00:40:08.213599 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 10 00:40:08.223753 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 00:40:08.229249 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 10 00:40:08.232047 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 10 00:40:08.239099 systemd[1]: Starting disk-uuid.service... May 10 00:40:08.248868 disk-uuid[528]: Primary Header is updated. 
May 10 00:40:08.248868 disk-uuid[528]: Secondary Entries is updated. May 10 00:40:08.248868 disk-uuid[528]: Secondary Header is updated. May 10 00:40:08.253406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:40:08.256388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:40:08.260391 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:40:08.461409 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 10 00:40:08.461483 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 10 00:40:08.462404 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 10 00:40:08.465049 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 10 00:40:08.465156 kernel: ata3.00: applying bridge limits May 10 00:40:08.465180 kernel: ata3.00: configured for UDMA/100 May 10 00:40:08.466402 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 10 00:40:08.510419 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 10 00:40:08.510498 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 10 00:40:08.511394 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 10 00:40:08.552416 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 10 00:40:08.569190 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 10 00:40:08.569210 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 10 00:40:09.282394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 00:40:09.282452 disk-uuid[529]: The operation has completed successfully. May 10 00:40:09.304517 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 00:40:09.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:40:09.304603 systemd[1]: Finished disk-uuid.service. May 10 00:40:09.314746 systemd[1]: Starting verity-setup.service... May 10 00:40:09.328415 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 10 00:40:09.348669 systemd[1]: Found device dev-mapper-usr.device. May 10 00:40:09.351210 systemd[1]: Mounting sysusr-usr.mount... May 10 00:40:09.353149 systemd[1]: Finished verity-setup.service. May 10 00:40:09.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.415278 systemd[1]: Mounted sysusr-usr.mount. May 10 00:40:09.416797 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 10 00:40:09.415856 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 10 00:40:09.416615 systemd[1]: Starting ignition-setup.service... May 10 00:40:09.420571 systemd[1]: Starting parse-ip-for-networkd.service... May 10 00:40:09.433010 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:40:09.433081 kernel: BTRFS info (device vda6): using free space tree May 10 00:40:09.433092 kernel: BTRFS info (device vda6): has skinny extents May 10 00:40:09.442663 systemd[1]: mnt-oem.mount: Deactivated successfully. May 10 00:40:09.485834 systemd[1]: Finished ignition-setup.service. May 10 00:40:09.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.488787 systemd[1]: Starting ignition-fetch-offline.service... May 10 00:40:09.505150 systemd[1]: Finished parse-ip-for-networkd.service. 
May 10 00:40:09.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.506000 audit: BPF prog-id=9 op=LOAD May 10 00:40:09.507911 systemd[1]: Starting systemd-networkd.service... May 10 00:40:09.533775 systemd-networkd[716]: lo: Link UP May 10 00:40:09.533785 systemd-networkd[716]: lo: Gained carrier May 10 00:40:09.535871 systemd-networkd[716]: Enumeration completed May 10 00:40:09.536810 systemd[1]: Started systemd-networkd.service. May 10 00:40:09.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.538125 systemd[1]: Reached target network.target. May 10 00:40:09.538063 ignition[700]: Ignition 2.14.0 May 10 00:40:09.541150 systemd[1]: Starting iscsiuio.service... May 10 00:40:09.538070 ignition[700]: Stage: fetch-offline May 10 00:40:09.538155 ignition[700]: no configs at "/usr/lib/ignition/base.d" May 10 00:40:09.538164 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:40:09.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.546452 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:40:09.538275 ignition[700]: parsed url from cmdline: "" May 10 00:40:09.546635 systemd[1]: Started iscsiuio.service. May 10 00:40:09.538278 ignition[700]: no config URL provided May 10 00:40:09.549188 systemd[1]: Starting iscsid.service... 
May 10 00:40:09.538283 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:40:09.555665 iscsid[728]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 10 00:40:09.555665 iscsid[728]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 10 00:40:09.555665 iscsid[728]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 10 00:40:09.555665 iscsid[728]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 10 00:40:09.555665 iscsid[728]: If using hardware iscsi like qla4xxx this message can be ignored. May 10 00:40:09.555665 iscsid[728]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 10 00:40:09.555665 iscsid[728]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 10 00:40:09.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.538291 ignition[700]: no config at "/usr/lib/ignition/user.ign" May 10 00:40:09.555761 systemd[1]: Started iscsid.service. May 10 00:40:09.538320 ignition[700]: op(1): [started] loading QEMU firmware config module May 10 00:40:09.557299 systemd[1]: Starting dracut-initqueue.service...
May 10 00:40:09.538326 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" May 10 00:40:09.557751 systemd-networkd[716]: eth0: Link UP May 10 00:40:09.542424 ignition[700]: op(1): [finished] loading QEMU firmware config module May 10 00:40:09.557755 systemd-networkd[716]: eth0: Gained carrier May 10 00:40:09.542442 ignition[700]: QEMU firmware config was not found. Ignoring... May 10 00:40:09.569230 systemd[1]: Finished dracut-initqueue.service. May 10 00:40:09.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.571750 systemd[1]: Reached target remote-fs-pre.target. May 10 00:40:09.574075 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:40:09.575101 systemd[1]: Reached target remote-fs.target. May 10 00:40:09.576799 systemd[1]: Starting dracut-pre-mount.service... May 10 00:40:09.586164 systemd[1]: Finished dracut-pre-mount.service. May 10 00:40:09.622377 ignition[700]: parsing config with SHA512: e75560c7b7c903bfef971bb1f881f5fdccc0b6cccd10ae5255c700fb8458f11d63a9ddfce5a34a41284322f955ebed3b5fd925b085a9d0d2f6784c0c0bb5430f May 10 00:40:09.698178 unknown[700]: fetched base config from "system" May 10 00:40:09.698190 unknown[700]: fetched user config from "qemu" May 10 00:40:09.698701 ignition[700]: fetch-offline: fetch-offline passed May 10 00:40:09.698767 ignition[700]: Ignition finished successfully May 10 00:40:09.700631 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 00:40:09.703816 systemd[1]: Finished ignition-fetch-offline.service. May 10 00:40:09.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:40:09.704913 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 10 00:40:09.705694 systemd[1]: Starting ignition-kargs.service... May 10 00:40:09.719953 ignition[742]: Ignition 2.14.0 May 10 00:40:09.719963 ignition[742]: Stage: kargs May 10 00:40:09.720052 ignition[742]: no configs at "/usr/lib/ignition/base.d" May 10 00:40:09.722596 systemd[1]: Finished ignition-kargs.service. May 10 00:40:09.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.720061 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:40:09.721288 ignition[742]: kargs: kargs passed May 10 00:40:09.725009 systemd[1]: Starting ignition-disks.service... May 10 00:40:09.721326 ignition[742]: Ignition finished successfully May 10 00:40:09.734434 ignition[748]: Ignition 2.14.0 May 10 00:40:09.734445 ignition[748]: Stage: disks May 10 00:40:09.734566 ignition[748]: no configs at "/usr/lib/ignition/base.d" May 10 00:40:09.734575 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:40:09.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.736792 systemd[1]: Finished ignition-disks.service. May 10 00:40:09.736035 ignition[748]: disks: disks passed May 10 00:40:09.737851 systemd[1]: Reached target initrd-root-device.target. May 10 00:40:09.736074 ignition[748]: Ignition finished successfully May 10 00:40:09.739830 systemd[1]: Reached target local-fs-pre.target. May 10 00:40:09.740810 systemd[1]: Reached target local-fs.target. May 10 00:40:09.741715 systemd[1]: Reached target sysinit.target. 
May 10 00:40:09.742611 systemd[1]: Reached target basic.target. May 10 00:40:09.745156 systemd[1]: Starting systemd-fsck-root.service... May 10 00:40:09.777513 systemd-fsck[756]: ROOT: clean, 623/553520 files, 56023/553472 blocks May 10 00:40:09.991743 systemd[1]: Finished systemd-fsck-root.service. May 10 00:40:09.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:09.993633 systemd[1]: Mounting sysroot.mount... May 10 00:40:10.023417 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 10 00:40:10.024109 systemd[1]: Mounted sysroot.mount. May 10 00:40:10.024976 systemd[1]: Reached target initrd-root-fs.target. May 10 00:40:10.027659 systemd[1]: Mounting sysroot-usr.mount... May 10 00:40:10.028606 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 10 00:40:10.028636 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 00:40:10.028655 systemd[1]: Reached target ignition-diskful.target. May 10 00:40:10.030641 systemd[1]: Mounted sysroot-usr.mount. May 10 00:40:10.032671 systemd[1]: Starting initrd-setup-root.service... May 10 00:40:10.037738 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:40:10.041935 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory May 10 00:40:10.045155 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:40:10.048925 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:40:10.087706 systemd[1]: Finished initrd-setup-root.service. 
May 10 00:40:10.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:10.093054 systemd[1]: Starting ignition-mount.service... May 10 00:40:10.094643 systemd[1]: Starting sysroot-boot.service... May 10 00:40:10.099839 bash[807]: umount: /sysroot/usr/share/oem: not mounted. May 10 00:40:10.109485 ignition[808]: INFO : Ignition 2.14.0 May 10 00:40:10.109485 ignition[808]: INFO : Stage: mount May 10 00:40:10.111961 ignition[808]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:40:10.111961 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:40:10.111961 ignition[808]: INFO : mount: mount passed May 10 00:40:10.111961 ignition[808]: INFO : Ignition finished successfully May 10 00:40:10.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:10.112098 systemd[1]: Finished ignition-mount.service. May 10 00:40:10.159428 systemd[1]: Finished sysroot-boot.service. May 10 00:40:10.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:10.363728 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 10 00:40:10.371404 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817) May 10 00:40:10.371447 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 00:40:10.373265 kernel: BTRFS info (device vda6): using free space tree May 10 00:40:10.373278 kernel: BTRFS info (device vda6): has skinny extents May 10 00:40:10.377489 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 10 00:40:10.378638 systemd[1]: Starting ignition-files.service... May 10 00:40:10.397154 ignition[837]: INFO : Ignition 2.14.0 May 10 00:40:10.397154 ignition[837]: INFO : Stage: files May 10 00:40:10.398974 ignition[837]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:40:10.398974 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:40:10.398974 ignition[837]: DEBUG : files: compiled without relabeling support, skipping May 10 00:40:10.403012 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:40:10.403012 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:40:10.403012 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:40:10.403012 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:40:10.403012 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:40:10.402927 unknown[837]: wrote ssh authorized keys file for user: core May 10 00:40:10.412177 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 10 00:40:10.412177 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 10 00:40:10.412177 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 00:40:10.412177 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 10 00:40:10.454242 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 10 00:40:10.626928 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 10 00:40:10.626928 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:40:10.626928 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 10 00:40:11.121284 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK May 10 00:40:11.174580 systemd-networkd[716]: eth0: Gained IPv6LL May 10 00:40:11.408203 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 10 00:40:11.408203 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:40:11.412002 ignition[837]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 00:40:11.412002 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 10 00:40:11.687760 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 10 00:40:12.333175 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 00:40:12.333175 ignition[837]: INFO : files: op(d): [started] processing unit "containerd.service"
May 10 00:40:12.337487 ignition[837]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 10 00:40:12.340139 ignition[837]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 10 00:40:12.340139 ignition[837]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 10 00:40:12.340139 ignition[837]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
May 10 00:40:12.345479 ignition[837]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 10 00:40:12.525546 ignition[837]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 10 00:40:12.527640 ignition[837]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
May 10 00:40:12.527640 ignition[837]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:40:12.527640 ignition[837]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:40:12.527640 ignition[837]: INFO : files: files passed
May 10 00:40:12.527640 ignition[837]: INFO : Ignition finished successfully
May 10 00:40:12.535087 systemd[1]: Finished ignition-files.service.
May 10 00:40:12.541757 kernel: kauditd_printk_skb: 23 callbacks suppressed
May 10 00:40:12.541784 kernel: audit: type=1130 audit(1746837612.535:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.541903 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 10 00:40:12.542421 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 10 00:40:12.544552 systemd[1]: Starting ignition-quench.service...
May 10 00:40:12.549048 systemd[1]: ignition-quench.service: Deactivated successfully.
May 10 00:40:12.549218 systemd[1]: Finished ignition-quench.service.
May 10 00:40:12.560682 kernel: audit: type=1130 audit(1746837612.550:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.560711 kernel: audit: type=1131 audit(1746837612.550:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.560733 kernel: audit: type=1130 audit(1746837612.560:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.560838 initrd-setup-root-after-ignition[863]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 10 00:40:12.554671 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 10 00:40:12.569881 initrd-setup-root-after-ignition[865]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 10 00:40:12.560826 systemd[1]: Reached target ignition-complete.target.
May 10 00:40:12.566803 systemd[1]: Starting initrd-parse-etc.service...
May 10 00:40:12.584612 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 10 00:40:12.584738 systemd[1]: Finished initrd-parse-etc.service.
May 10 00:40:12.595916 kernel: audit: type=1130 audit(1746837612.586:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.595944 kernel: audit: type=1131 audit(1746837612.586:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.586932 systemd[1]: Reached target initrd-fs.target.
May 10 00:40:12.595919 systemd[1]: Reached target initrd.target.
May 10 00:40:12.596869 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 10 00:40:12.597934 systemd[1]: Starting dracut-pre-pivot.service...
May 10 00:40:12.613526 systemd[1]: Finished dracut-pre-pivot.service.
May 10 00:40:12.620194 kernel: audit: type=1130 audit(1746837612.614:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.615574 systemd[1]: Starting initrd-cleanup.service...
May 10 00:40:12.627308 systemd[1]: Stopped target nss-lookup.target.
May 10 00:40:12.628396 systemd[1]: Stopped target remote-cryptsetup.target.
May 10 00:40:12.630147 systemd[1]: Stopped target timers.target.
May 10 00:40:12.631776 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 10 00:40:12.638947 kernel: audit: type=1131 audit(1746837612.632:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.631889 systemd[1]: Stopped dracut-pre-pivot.service.
May 10 00:40:12.633407 systemd[1]: Stopped target initrd.target.
May 10 00:40:12.639037 systemd[1]: Stopped target basic.target.
May 10 00:40:12.640677 systemd[1]: Stopped target ignition-complete.target.
May 10 00:40:12.642248 systemd[1]: Stopped target ignition-diskful.target.
May 10 00:40:12.643873 systemd[1]: Stopped target initrd-root-device.target.
May 10 00:40:12.645623 systemd[1]: Stopped target remote-fs.target.
May 10 00:40:12.647333 systemd[1]: Stopped target remote-fs-pre.target.
May 10 00:40:12.649046 systemd[1]: Stopped target sysinit.target.
May 10 00:40:12.650575 systemd[1]: Stopped target local-fs.target.
May 10 00:40:12.652152 systemd[1]: Stopped target local-fs-pre.target.
May 10 00:40:12.653753 systemd[1]: Stopped target swap.target.
May 10 00:40:12.662301 kernel: audit: type=1131 audit(1746837612.656:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.655204 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 10 00:40:12.655329 systemd[1]: Stopped dracut-pre-mount.service.
May 10 00:40:12.669675 kernel: audit: type=1131 audit(1746837612.663:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.656885 systemd[1]: Stopped target cryptsetup.target.
May 10 00:40:12.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.662413 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 10 00:40:12.662565 systemd[1]: Stopped dracut-initqueue.service.
May 10 00:40:12.664338 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 10 00:40:12.664495 systemd[1]: Stopped ignition-fetch-offline.service.
May 10 00:40:12.669876 systemd[1]: Stopped target paths.target.
May 10 00:40:12.671338 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 10 00:40:12.675431 systemd[1]: Stopped systemd-ask-password-console.path.
May 10 00:40:12.677007 systemd[1]: Stopped target slices.target.
May 10 00:40:12.678880 systemd[1]: Stopped target sockets.target.
May 10 00:40:12.680682 systemd[1]: iscsid.socket: Deactivated successfully.
May 10 00:40:12.680779 systemd[1]: Closed iscsid.socket.
May 10 00:40:12.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.682159 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 10 00:40:12.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.682260 systemd[1]: Closed iscsiuio.socket.
May 10 00:40:12.683659 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 10 00:40:12.683799 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 10 00:40:12.685546 systemd[1]: ignition-files.service: Deactivated successfully.
May 10 00:40:12.685685 systemd[1]: Stopped ignition-files.service.
May 10 00:40:12.688438 systemd[1]: Stopping ignition-mount.service...
May 10 00:40:12.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.690442 systemd[1]: Stopping sysroot-boot.service...
May 10 00:40:12.692337 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 10 00:40:12.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.693524 systemd[1]: Stopped systemd-udev-trigger.service.
May 10 00:40:12.695424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 10 00:40:12.695614 systemd[1]: Stopped dracut-pre-trigger.service.
May 10 00:40:12.701147 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 10 00:40:12.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.701269 systemd[1]: Finished initrd-cleanup.service.
May 10 00:40:12.708129 ignition[878]: INFO : Ignition 2.14.0
May 10 00:40:12.708129 ignition[878]: INFO : Stage: umount
May 10 00:40:12.710215 ignition[878]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 00:40:12.710215 ignition[878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 00:40:12.708963 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 10 00:40:12.714979 ignition[878]: INFO : umount: umount passed
May 10 00:40:12.714979 ignition[878]: INFO : Ignition finished successfully
May 10 00:40:12.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.711511 systemd[1]: ignition-mount.service: Deactivated successfully.
May 10 00:40:12.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.711622 systemd[1]: Stopped ignition-mount.service.
May 10 00:40:12.715109 systemd[1]: Stopped target network.target.
May 10 00:40:12.716619 systemd[1]: ignition-disks.service: Deactivated successfully.
May 10 00:40:12.716683 systemd[1]: Stopped ignition-disks.service.
May 10 00:40:12.718596 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 10 00:40:12.718642 systemd[1]: Stopped ignition-kargs.service.
May 10 00:40:12.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.719175 systemd[1]: ignition-setup.service: Deactivated successfully.
May 10 00:40:12.719225 systemd[1]: Stopped ignition-setup.service.
May 10 00:40:12.719479 systemd[1]: Stopping systemd-networkd.service...
May 10 00:40:12.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.720887 systemd[1]: Stopping systemd-resolved.service...
May 10 00:40:12.728425 systemd-networkd[716]: eth0: DHCPv6 lease lost
May 10 00:40:12.737000 audit: BPF prog-id=9 op=UNLOAD
May 10 00:40:12.738000 audit: BPF prog-id=6 op=UNLOAD
May 10 00:40:12.729573 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 10 00:40:12.729710 systemd[1]: Stopped systemd-networkd.service.
May 10 00:40:12.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.733708 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 10 00:40:12.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.733824 systemd[1]: Stopped systemd-resolved.service.
May 10 00:40:12.737782 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 10 00:40:12.737821 systemd[1]: Closed systemd-networkd.socket.
May 10 00:40:12.740806 systemd[1]: Stopping network-cleanup.service...
May 10 00:40:12.741765 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 10 00:40:12.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.741828 systemd[1]: Stopped parse-ip-for-networkd.service.
May 10 00:40:12.744077 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 00:40:12.744115 systemd[1]: Stopped systemd-sysctl.service.
May 10 00:40:12.746682 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 10 00:40:12.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.746721 systemd[1]: Stopped systemd-modules-load.service.
May 10 00:40:12.748159 systemd[1]: Stopping systemd-udevd.service...
May 10 00:40:12.750558 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 10 00:40:12.754839 systemd[1]: network-cleanup.service: Deactivated successfully.
May 10 00:40:12.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.754943 systemd[1]: Stopped network-cleanup.service.
May 10 00:40:12.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.759982 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 10 00:40:12.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.760123 systemd[1]: Stopped systemd-udevd.service.
May 10 00:40:12.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.763099 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 10 00:40:12.763135 systemd[1]: Closed systemd-udevd-control.socket.
May 10 00:40:12.765193 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 10 00:40:12.765228 systemd[1]: Closed systemd-udevd-kernel.socket.
May 10 00:40:12.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.767281 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 10 00:40:12.767330 systemd[1]: Stopped dracut-pre-udev.service.
May 10 00:40:12.769953 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 10 00:40:12.770013 systemd[1]: Stopped dracut-cmdline.service.
May 10 00:40:12.772165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 00:40:12.772199 systemd[1]: Stopped dracut-cmdline-ask.service.
May 10 00:40:12.773689 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 10 00:40:12.774934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 10 00:40:12.774979 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 10 00:40:12.777227 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 10 00:40:12.777273 systemd[1]: Stopped kmod-static-nodes.service.
May 10 00:40:12.778633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 00:40:12.778703 systemd[1]: Stopped systemd-vconsole-setup.service.
May 10 00:40:12.782268 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 10 00:40:12.782848 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 10 00:40:12.782941 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 10 00:40:12.831170 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 10 00:40:12.831319 systemd[1]: Stopped sysroot-boot.service.
May 10 00:40:12.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.832178 systemd[1]: Reached target initrd-switch-root.target.
May 10 00:40:12.834229 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 10 00:40:12.834272 systemd[1]: Stopped initrd-setup-root.service.
May 10 00:40:12.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:12.840240 systemd[1]: Starting initrd-switch-root.service...
May 10 00:40:12.850976 systemd[1]: Switching root.
May 10 00:40:12.852000 audit: BPF prog-id=8 op=UNLOAD
May 10 00:40:12.852000 audit: BPF prog-id=7 op=UNLOAD
May 10 00:40:12.852000 audit: BPF prog-id=5 op=UNLOAD
May 10 00:40:12.853000 audit: BPF prog-id=4 op=UNLOAD
May 10 00:40:12.853000 audit: BPF prog-id=3 op=UNLOAD
May 10 00:40:12.872523 iscsid[728]: iscsid shutting down.
May 10 00:40:12.873444 systemd-journald[199]: Received SIGTERM from PID 1 (systemd).
May 10 00:40:12.873507 systemd-journald[199]: Journal stopped
May 10 00:40:17.479832 kernel: SELinux: Class mctp_socket not defined in policy.
May 10 00:40:17.479914 kernel: SELinux: Class anon_inode not defined in policy.
May 10 00:40:17.479930 kernel: SELinux: the above unknown classes and permissions will be allowed
May 10 00:40:17.479944 kernel: SELinux: policy capability network_peer_controls=1
May 10 00:40:17.479957 kernel: SELinux: policy capability open_perms=1
May 10 00:40:17.479970 kernel: SELinux: policy capability extended_socket_class=1
May 10 00:40:17.479984 kernel: SELinux: policy capability always_check_network=0
May 10 00:40:17.479998 kernel: SELinux: policy capability cgroup_seclabel=1
May 10 00:40:17.480020 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 10 00:40:17.480034 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 10 00:40:17.480052 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 10 00:40:17.480070 systemd[1]: Successfully loaded SELinux policy in 45.141ms.
May 10 00:40:17.480090 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.254ms.
May 10 00:40:17.480107 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 10 00:40:17.480124 systemd[1]: Detected virtualization kvm.
May 10 00:40:17.480138 systemd[1]: Detected architecture x86-64.
May 10 00:40:17.480152 systemd[1]: Detected first boot.
May 10 00:40:17.480166 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:40:17.480179 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 10 00:40:17.480198 systemd[1]: Populated /etc with preset unit settings.
May 10 00:40:17.480213 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 10 00:40:17.480233 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 10 00:40:17.480250 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:40:17.480268 systemd[1]: Queued start job for default target multi-user.target.
May 10 00:40:17.480283 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 10 00:40:17.480297 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 10 00:40:17.480313 systemd[1]: Created slice system-addon\x2drun.slice.
May 10 00:40:17.480327 systemd[1]: Created slice system-getty.slice.
May 10 00:40:17.480346 systemd[1]: Created slice system-modprobe.slice.
May 10 00:40:17.480360 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 10 00:40:17.480405 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 10 00:40:17.480420 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 10 00:40:17.480435 systemd[1]: Created slice user.slice.
May 10 00:40:17.480450 systemd[1]: Started systemd-ask-password-console.path.
May 10 00:40:17.480465 systemd[1]: Started systemd-ask-password-wall.path.
May 10 00:40:17.480479 systemd[1]: Set up automount boot.automount.
May 10 00:40:17.480494 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 10 00:40:17.480509 systemd[1]: Reached target integritysetup.target.
May 10 00:40:17.480531 systemd[1]: Reached target remote-cryptsetup.target.
May 10 00:40:17.480576 systemd[1]: Reached target remote-fs.target.
May 10 00:40:17.480595 systemd[1]: Reached target slices.target.
May 10 00:40:17.480611 systemd[1]: Reached target swap.target.
May 10 00:40:17.480627 systemd[1]: Reached target torcx.target.
May 10 00:40:17.480643 systemd[1]: Reached target veritysetup.target.
May 10 00:40:17.480659 systemd[1]: Listening on systemd-coredump.socket.
May 10 00:40:17.480674 systemd[1]: Listening on systemd-initctl.socket.
May 10 00:40:17.480689 systemd[1]: Listening on systemd-journald-audit.socket.
May 10 00:40:17.480705 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 10 00:40:17.480732 systemd[1]: Listening on systemd-journald.socket.
May 10 00:40:17.480748 systemd[1]: Listening on systemd-networkd.socket.
May 10 00:40:17.480762 systemd[1]: Listening on systemd-udevd-control.socket.
May 10 00:40:17.480776 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 10 00:40:17.480789 systemd[1]: Listening on systemd-userdbd.socket.
May 10 00:40:17.480802 systemd[1]: Mounting dev-hugepages.mount...
May 10 00:40:17.480816 systemd[1]: Mounting dev-mqueue.mount...
May 10 00:40:17.480829 systemd[1]: Mounting media.mount...
May 10 00:40:17.480842 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:40:17.480866 systemd[1]: Mounting sys-kernel-debug.mount...
May 10 00:40:17.480879 systemd[1]: Mounting sys-kernel-tracing.mount...
May 10 00:40:17.480892 systemd[1]: Mounting tmp.mount...
May 10 00:40:17.480909 systemd[1]: Starting flatcar-tmpfiles.service...
May 10 00:40:17.480923 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 00:40:17.480938 systemd[1]: Starting kmod-static-nodes.service...
May 10 00:40:17.480952 systemd[1]: Starting modprobe@configfs.service...
May 10 00:40:17.480966 systemd[1]: Starting modprobe@dm_mod.service...
May 10 00:40:17.480980 systemd[1]: Starting modprobe@drm.service...
May 10 00:40:17.481003 systemd[1]: Starting modprobe@efi_pstore.service...
May 10 00:40:17.481018 systemd[1]: Starting modprobe@fuse.service...
May 10 00:40:17.481031 systemd[1]: Starting modprobe@loop.service...
May 10 00:40:17.481047 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 10 00:40:17.481062 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 10 00:40:17.481076 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 10 00:40:17.481090 systemd[1]: Starting systemd-journald.service...
May 10 00:40:17.481104 kernel: fuse: init (API version 7.34)
May 10 00:40:17.481117 systemd[1]: Starting systemd-modules-load.service...
May 10 00:40:17.481140 systemd[1]: Starting systemd-network-generator.service...
May 10 00:40:17.481154 systemd[1]: Starting systemd-remount-fs.service...
May 10 00:40:17.481167 kernel: loop: module loaded
May 10 00:40:17.481181 systemd[1]: Starting systemd-udev-trigger.service...
May 10 00:40:17.481196 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:40:17.481210 systemd[1]: Mounted dev-hugepages.mount.
May 10 00:40:17.481224 systemd[1]: Mounted dev-mqueue.mount.
May 10 00:40:17.481238 systemd[1]: Mounted media.mount.
May 10 00:40:17.481261 systemd-journald[1023]: Journal started
May 10 00:40:17.481327 systemd-journald[1023]: Runtime Journal (/run/log/journal/28a8e9cfda3d46d28994fbf2691b43ce) is 6.0M, max 48.5M, 42.5M free.
May 10 00:40:17.336000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 10 00:40:17.336000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 10 00:40:17.477000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 10 00:40:17.477000 audit[1023]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffede740b40 a2=4000 a3=7ffede740bdc items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:40:17.477000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 10 00:40:17.483416 systemd[1]: Started systemd-journald.service.
May 10 00:40:17.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.484203 systemd[1]: Mounted sys-kernel-debug.mount.
May 10 00:40:17.485115 systemd[1]: Mounted sys-kernel-tracing.mount.
May 10 00:40:17.486028 systemd[1]: Mounted tmp.mount.
May 10 00:40:17.487196 systemd[1]: Finished kmod-static-nodes.service.
May 10 00:40:17.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.488314 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 10 00:40:17.488775 systemd[1]: Finished modprobe@configfs.service.
May 10 00:40:17.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.489927 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:40:17.490173 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:40:17.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.491521 systemd[1]: Finished flatcar-tmpfiles.service.
May 10 00:40:17.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.492664 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 00:40:17.492879 systemd[1]: Finished modprobe@drm.service.
May 10 00:40:17.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.494171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:40:17.494391 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:40:17.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.495563 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 10 00:40:17.495816 systemd[1]: Finished modprobe@fuse.service.
May 10 00:40:17.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.497046 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:40:17.497284 systemd[1]: Finished modprobe@loop.service.
May 10 00:40:17.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.498675 systemd[1]: Finished systemd-modules-load.service.
May 10 00:40:17.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.500108 systemd[1]: Finished systemd-network-generator.service.
May 10 00:40:17.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.501858 systemd[1]: Finished systemd-remount-fs.service.
May 10 00:40:17.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.503251 systemd[1]: Reached target network-pre.target.
May 10 00:40:17.505444 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 10 00:40:17.507748 systemd[1]: Mounting sys-kernel-config.mount...
May 10 00:40:17.508604 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 10 00:40:17.510460 systemd[1]: Starting systemd-hwdb-update.service...
May 10 00:40:17.512919 systemd[1]: Starting systemd-journal-flush.service...
May 10 00:40:17.516476 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:40:17.518497 systemd-journald[1023]: Time spent on flushing to /var/log/journal/28a8e9cfda3d46d28994fbf2691b43ce is 13.576ms for 1036 entries.
May 10 00:40:17.518497 systemd-journald[1023]: System Journal (/var/log/journal/28a8e9cfda3d46d28994fbf2691b43ce) is 8.0M, max 195.6M, 187.6M free.
May 10 00:40:17.539480 systemd-journald[1023]: Received client request to flush runtime journal.
May 10 00:40:17.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.517946 systemd[1]: Starting systemd-random-seed.service...
May 10 00:40:17.520237 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 10 00:40:17.521154 systemd[1]: Starting systemd-sysctl.service...
May 10 00:40:17.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.523119 systemd[1]: Starting systemd-sysusers.service...
May 10 00:40:17.525699 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 10 00:40:17.526879 systemd[1]: Mounted sys-kernel-config.mount.
May 10 00:40:17.530484 systemd[1]: Finished systemd-random-seed.service.
May 10 00:40:17.531628 systemd[1]: Reached target first-boot-complete.target.
May 10 00:40:17.541074 systemd[1]: Finished systemd-sysctl.service.
May 10 00:40:17.542607 systemd[1]: Finished systemd-journal-flush.service.
May 10 00:40:17.543224 kernel: kauditd_printk_skb: 70 callbacks suppressed
May 10 00:40:17.543262 kernel: audit: type=1130 audit(1746837617.541:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.552515 kernel: audit: type=1130 audit(1746837617.547:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.553553 systemd[1]: Finished systemd-udev-trigger.service.
May 10 00:40:17.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.556225 systemd[1]: Starting systemd-udev-settle.service...
May 10 00:40:17.559399 kernel: audit: type=1130 audit(1746837617.554:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.560844 systemd[1]: Finished systemd-sysusers.service.
May 10 00:40:17.568437 kernel: audit: type=1130 audit(1746837617.561:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.563079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 10 00:40:17.572309 udevadm[1070]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 10 00:40:17.584568 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 10 00:40:17.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.589400 kernel: audit: type=1130 audit(1746837617.585:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.997006 systemd[1]: Finished systemd-hwdb-update.service.
May 10 00:40:17.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:17.999186 systemd[1]: Starting systemd-udevd.service...
May 10 00:40:18.002396 kernel: audit: type=1130 audit(1746837617.997:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:18.020612 systemd-udevd[1076]: Using default interface naming scheme 'v252'.
May 10 00:40:18.035503 systemd[1]: Started systemd-udevd.service.
May 10 00:40:18.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:18.039086 systemd[1]: Starting systemd-networkd.service...
May 10 00:40:18.040396 kernel: audit: type=1130 audit(1746837618.036:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:18.045745 systemd[1]: Starting systemd-userdbd.service...
May 10 00:40:18.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:18.084486 systemd[1]: Started systemd-userdbd.service.
May 10 00:40:18.089391 kernel: audit: type=1130 audit(1746837618.085:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:18.093240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 10 00:40:18.099220 systemd[1]: Found device dev-ttyS0.device.
May 10 00:40:18.121395 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 10 00:40:18.127385 kernel: ACPI: button: Power Button [PWRF]
May 10 00:40:18.135984 systemd-networkd[1086]: lo: Link UP
May 10 00:40:18.135996 systemd-networkd[1086]: lo: Gained carrier
May 10 00:40:18.136603 systemd-networkd[1086]: Enumeration completed
May 10 00:40:18.136731 systemd[1]: Started systemd-networkd.service.
May 10 00:40:18.136766 systemd-networkd[1086]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:40:18.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:18.138811 systemd-networkd[1086]: eth0: Link UP
May 10 00:40:18.138819 systemd-networkd[1086]: eth0: Gained carrier
May 10 00:40:18.141415 kernel: audit: type=1130 audit(1746837618.137:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:40:18.141000 audit[1091]: AVC avc: denied { confidentiality } for pid=1091 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 10 00:40:18.156838 systemd-networkd[1086]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 10 00:40:18.157402 kernel: audit: type=1400 audit(1746837618.141:114): avc: denied { confidentiality } for pid=1091 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 10 00:40:18.141000 audit[1091]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5573876c7ef0 a1=338ac a2=7f86475bebc5 a3=5 items=110 ppid=1076 pid=1091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:40:18.141000 audit: CWD cwd="/"
May 10 00:40:18.141000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=1 name=(null) inode=12772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=2 name=(null) inode=12772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=3 name=(null) inode=12773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=4 name=(null) inode=12772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=5 name=(null) inode=12774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=6 name=(null) inode=12772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=7 name=(null) inode=12775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=8 name=(null) inode=12775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=9 name=(null) inode=12776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=10 name=(null) inode=12775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=11 name=(null) inode=12777 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=12 name=(null) inode=12775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=13 name=(null) inode=12778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=14 name=(null) inode=12775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=15 name=(null) inode=12779 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=16 name=(null) inode=12775 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=17 name=(null) inode=12780 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=18 name=(null) inode=12772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=19 name=(null) inode=12781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=20 name=(null) inode=12781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=21 name=(null) inode=12782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=22 name=(null) inode=12781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=23 name=(null) inode=12783 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=24 name=(null) inode=12781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=25 name=(null) inode=12784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=26 name=(null) inode=12781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=27 name=(null) inode=12785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=28 name=(null) inode=12781 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=29 name=(null) inode=12786 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=30 name=(null) inode=12772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=31 name=(null) inode=12787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=32 name=(null) inode=12787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=33 name=(null) inode=12788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=34 name=(null) inode=12787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=35 name=(null) inode=12789 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=36 name=(null) inode=12787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=37 name=(null) inode=12790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=38 name=(null) inode=12787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=39 name=(null) inode=12791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=40 name=(null) inode=12787 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=41 name=(null) inode=12792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=42 name=(null) inode=12772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=43 name=(null) inode=12793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=44 name=(null) inode=12793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=45 name=(null) inode=12794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=46 name=(null) inode=12793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=47 name=(null) inode=12795 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=48 name=(null) inode=12793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=49 name=(null) inode=12796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=50 name=(null) inode=12793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=51 name=(null) inode=12797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=52 name=(null) inode=12793 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=53 name=(null) inode=12798 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=55 name=(null) inode=12799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=56 name=(null) inode=12799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=57 name=(null) inode=12800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=58 name=(null) inode=12799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=59 name=(null) inode=12801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=60 name=(null) inode=12799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=61 name=(null) inode=12802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=62 name=(null) inode=12802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=63 name=(null) inode=12803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=64 name=(null) inode=12802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=65 name=(null) inode=12804 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=66 name=(null) inode=12802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=67 name=(null) inode=12805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=68 name=(null) inode=12802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=69 name=(null) inode=12806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=70 name=(null) inode=12802 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=71 name=(null) inode=12807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=72 name=(null) inode=12799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=73 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=74 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=75 name=(null) inode=12809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=76 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=77 name=(null) inode=12810 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=78 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=79 name=(null) inode=12811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=80 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=81 name=(null) inode=12812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=82 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=83 name=(null) inode=12813 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=84 name=(null) inode=12799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=85 name=(null) inode=12814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=86 name=(null) inode=12814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=87 name=(null) inode=12815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=88 name=(null) inode=12814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=89 name=(null) inode=12816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=90 name=(null) inode=12814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=91 name=(null) inode=12817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=92 name=(null) inode=12814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=93 name=(null) inode=12818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=94 name=(null) inode=12814 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=95 name=(null) inode=12819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=96 name=(null) inode=12799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=97 name=(null) inode=12820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=98 name=(null) inode=12820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=99 name=(null) inode=12821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=100 name=(null) inode=12820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=101 name=(null) inode=12822 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=102 name=(null) inode=12820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=103 name=(null) inode=12823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=104 name=(null) inode=12820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=105 name=(null) inode=12824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=106 name=(null) inode=12820 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=107 name=(null) inode=12825 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PATH item=109 name=(null) inode=12826 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:40:18.141000 audit: PROCTITLE proctitle="(udev-worker)"
May 10 00:40:18.176404 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 10 00:40:18.179549 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 10 00:40:18.179748 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 10
00:40:18.192397 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 10 00:40:18.198399 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:40:18.223414 kernel: kvm: Nested Virtualization enabled May 10 00:40:18.223610 kernel: SVM: kvm: Nested Paging enabled May 10 00:40:18.223644 kernel: SVM: Virtual VMLOAD VMSAVE supported May 10 00:40:18.223678 kernel: SVM: Virtual GIF supported May 10 00:40:18.242672 kernel: EDAC MC: Ver: 3.0.0 May 10 00:40:18.265823 systemd[1]: Finished systemd-udev-settle.service. May 10 00:40:18.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.267979 systemd[1]: Starting lvm2-activation-early.service... May 10 00:40:18.275511 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:40:18.304534 systemd[1]: Finished lvm2-activation-early.service. May 10 00:40:18.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.305670 systemd[1]: Reached target cryptsetup.target. May 10 00:40:18.307783 systemd[1]: Starting lvm2-activation.service... May 10 00:40:18.311007 lvm[1114]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:40:18.337535 systemd[1]: Finished lvm2-activation.service. May 10 00:40:18.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.338622 systemd[1]: Reached target local-fs-pre.target. 
May 10 00:40:18.339534 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:40:18.339557 systemd[1]: Reached target local-fs.target. May 10 00:40:18.340382 systemd[1]: Reached target machines.target. May 10 00:40:18.342637 systemd[1]: Starting ldconfig.service... May 10 00:40:18.343883 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:40:18.343922 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:40:18.344890 systemd[1]: Starting systemd-boot-update.service... May 10 00:40:18.347212 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 00:40:18.349605 systemd[1]: Starting systemd-machine-id-commit.service... May 10 00:40:18.352626 systemd[1]: Starting systemd-sysext.service... May 10 00:40:18.354014 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1117 (bootctl) May 10 00:40:18.355063 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 10 00:40:18.359840 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 00:40:18.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.365723 systemd[1]: Unmounting usr-share-oem.mount... May 10 00:40:18.369824 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 00:40:18.370037 systemd[1]: Unmounted usr-share-oem.mount. 
May 10 00:40:18.381407 kernel: loop0: detected capacity change from 0 to 210664 May 10 00:40:18.396640 systemd-fsck[1126]: fsck.fat 4.2 (2021-01-31) May 10 00:40:18.396640 systemd-fsck[1126]: /dev/vda1: 790 files, 120688/258078 clusters May 10 00:40:18.399083 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 10 00:40:18.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.402279 systemd[1]: Mounting boot.mount... May 10 00:40:18.562700 systemd[1]: Mounted boot.mount. May 10 00:40:18.571416 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:40:18.594543 systemd[1]: Finished systemd-boot-update.service. May 10 00:40:18.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.605399 kernel: loop1: detected capacity change from 0 to 210664 May 10 00:40:18.614540 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:40:18.615397 systemd[1]: Finished systemd-machine-id-commit.service. May 10 00:40:18.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.617308 (sd-sysext)[1137]: Using extensions 'kubernetes'. May 10 00:40:18.618436 (sd-sysext)[1137]: Merged extensions into '/usr'. May 10 00:40:18.635961 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:40:18.637869 systemd[1]: Mounting usr-share-oem.mount... 
May 10 00:40:18.639054 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:40:18.640243 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:40:18.642501 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:40:18.644586 systemd[1]: Starting modprobe@loop.service... May 10 00:40:18.645612 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:40:18.645830 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:40:18.645976 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:40:18.649718 systemd[1]: Mounted usr-share-oem.mount. May 10 00:40:18.651298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:40:18.651517 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:40:18.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.653194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:40:18.653342 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:40:18.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:40:18.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.654898 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:40:18.655091 systemd[1]: Finished modprobe@loop.service. May 10 00:40:18.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.656705 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:40:18.656796 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:40:18.657699 systemd[1]: Finished systemd-sysext.service. May 10 00:40:18.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.660098 systemd[1]: Starting ensure-sysext.service... May 10 00:40:18.662005 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 00:40:18.666978 systemd[1]: Reloading. May 10 00:40:18.668351 ldconfig[1116]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:40:18.672830 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
May 10 00:40:18.693627 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:40:18.695825 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:40:18.725154 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2025-05-10T00:40:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:40:18.725576 /usr/lib/systemd/system-generators/torcx-generator[1172]: time="2025-05-10T00:40:18Z" level=info msg="torcx already run" May 10 00:40:18.803466 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:40:18.803493 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:40:18.826362 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:40:18.886394 systemd[1]: Finished ldconfig.service. May 10 00:40:18.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.888457 systemd[1]: Finished systemd-tmpfiles-setup.service. May 10 00:40:18.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:40:18.891590 systemd[1]: Starting audit-rules.service... May 10 00:40:18.893300 systemd[1]: Starting clean-ca-certificates.service... May 10 00:40:18.895287 systemd[1]: Starting systemd-journal-catalog-update.service... May 10 00:40:18.897846 systemd[1]: Starting systemd-resolved.service... May 10 00:40:18.900637 systemd[1]: Starting systemd-timesyncd.service... May 10 00:40:18.902831 systemd[1]: Starting systemd-update-utmp.service... May 10 00:40:18.904690 systemd[1]: Finished clean-ca-certificates.service. May 10 00:40:18.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.909000 audit[1233]: SYSTEM_BOOT pid=1233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 10 00:40:18.913315 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:40:18.913695 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:40:18.915982 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:40:18.918349 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:40:18.921103 systemd[1]: Starting modprobe@loop.service... May 10 00:40:18.922845 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:40:18.923004 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 10 00:40:18.923467 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:40:18.923629 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:40:18.925528 systemd[1]: Finished systemd-journal-catalog-update.service. May 10 00:40:18.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.927254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:40:18.927413 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:40:18.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.929024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:40:18.929192 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:40:18.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:40:18.930865 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:40:18.931233 systemd[1]: Finished modprobe@loop.service. May 10 00:40:18.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:40:18.934000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 10 00:40:18.934000 audit[1250]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff55a7c970 a2=420 a3=0 items=0 ppid=1221 pid=1250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 00:40:18.934000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 10 00:40:18.932969 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:40:18.936481 augenrules[1250]: No rules May 10 00:40:18.933104 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:40:18.934646 systemd[1]: Starting systemd-update-done.service... May 10 00:40:18.938611 systemd[1]: Finished audit-rules.service. May 10 00:40:18.940559 systemd[1]: Finished systemd-update-utmp.service. May 10 00:40:18.943704 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 10 00:40:18.943912 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:40:18.945978 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:40:18.948315 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:40:18.950751 systemd[1]: Starting modprobe@loop.service... May 10 00:40:18.951760 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:40:18.951899 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:40:18.952022 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:40:18.952113 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:40:18.953301 systemd[1]: Finished systemd-update-done.service. May 10 00:40:18.956127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:40:18.956343 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:40:18.957900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:40:18.958092 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:40:18.959829 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:40:18.959990 systemd[1]: Finished modprobe@loop.service. May 10 00:40:18.961535 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:40:18.961686 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:40:18.965185 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 10 00:40:18.965475 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:40:18.967494 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:40:18.970076 systemd[1]: Starting modprobe@drm.service... May 10 00:40:18.972854 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:40:18.975284 systemd[1]: Starting modprobe@loop.service... May 10 00:40:18.976484 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:40:18.976743 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:40:18.979728 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 00:40:18.980913 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:40:18.981039 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:40:18.982138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:40:18.982335 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:40:18.983812 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:40:18.984018 systemd[1]: Finished modprobe@drm.service. May 10 00:40:18.985240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:40:18.985439 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:40:18.987079 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:40:18.987328 systemd[1]: Finished modprobe@loop.service. May 10 00:40:18.988941 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 10 00:40:18.989027 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:40:18.991083 systemd[1]: Finished ensure-sysext.service. May 10 00:40:19.001031 systemd[1]: Started systemd-timesyncd.service. May 10 00:40:19.001620 systemd-resolved[1228]: Positive Trust Anchors: May 10 00:40:19.001643 systemd-resolved[1228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:40:19.001677 systemd-resolved[1228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 00:40:19.002442 systemd[1]: Reached target time-set.target. May 10 00:40:19.004445 systemd-timesyncd[1229]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 10 00:40:19.004944 systemd-timesyncd[1229]: Initial clock synchronization to Sat 2025-05-10 00:40:19.066439 UTC. May 10 00:40:19.011512 systemd-resolved[1228]: Defaulting to hostname 'linux'. May 10 00:40:19.013442 systemd[1]: Started systemd-resolved.service. May 10 00:40:19.014685 systemd[1]: Reached target network.target. May 10 00:40:19.015683 systemd[1]: Reached target nss-lookup.target. May 10 00:40:19.016696 systemd[1]: Reached target sysinit.target. May 10 00:40:19.017764 systemd[1]: Started motdgen.path. May 10 00:40:19.018552 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 10 00:40:19.019989 systemd[1]: Started logrotate.timer. May 10 00:40:19.020928 systemd[1]: Started mdadm.timer. May 10 00:40:19.021745 systemd[1]: Started systemd-tmpfiles-clean.timer. 
May 10 00:40:19.022799 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:40:19.022832 systemd[1]: Reached target paths.target. May 10 00:40:19.023755 systemd[1]: Reached target timers.target. May 10 00:40:19.025085 systemd[1]: Listening on dbus.socket. May 10 00:40:19.027573 systemd[1]: Starting docker.socket... May 10 00:40:19.029584 systemd[1]: Listening on sshd.socket. May 10 00:40:19.030605 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:40:19.030955 systemd[1]: Listening on docker.socket. May 10 00:40:19.031918 systemd[1]: Reached target sockets.target. May 10 00:40:19.032909 systemd[1]: Reached target basic.target. May 10 00:40:19.034001 systemd[1]: System is tainted: cgroupsv1 May 10 00:40:19.034050 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:40:19.034069 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 00:40:19.035532 systemd[1]: Starting containerd.service... May 10 00:40:19.037528 systemd[1]: Starting dbus.service... May 10 00:40:19.039482 systemd[1]: Starting enable-oem-cloudinit.service... May 10 00:40:19.041955 systemd[1]: Starting extend-filesystems.service... May 10 00:40:19.043037 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 10 00:40:19.044234 systemd[1]: Starting motdgen.service... May 10 00:40:19.044885 jq[1284]: false May 10 00:40:19.046206 systemd[1]: Starting prepare-helm.service... May 10 00:40:19.048719 systemd[1]: Starting ssh-key-proc-cmdline.service... May 10 00:40:19.051052 systemd[1]: Starting sshd-keygen.service... 
May 10 00:40:19.054005 systemd[1]: Starting systemd-logind.service... May 10 00:40:19.054909 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:40:19.054975 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:40:19.056168 systemd[1]: Starting update-engine.service... May 10 00:40:19.058392 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 10 00:40:19.061776 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 00:40:19.062931 jq[1301]: true May 10 00:40:19.062052 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 10 00:40:19.073409 jq[1305]: true May 10 00:40:19.074540 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:40:19.075245 tar[1304]: linux-amd64/helm May 10 00:40:19.074830 systemd[1]: Finished ssh-key-proc-cmdline.service. May 10 00:40:19.087593 dbus-daemon[1283]: [system] SELinux support is enabled May 10 00:40:19.087923 systemd[1]: Started dbus.service. May 10 00:40:19.091427 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:40:19.091766 systemd[1]: Finished motdgen.service. May 10 00:40:19.092968 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:40:19.092998 systemd[1]: Reached target system-config.target. May 10 00:40:19.096643 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:40:19.096675 systemd[1]: Reached target user-config.target. 
May 10 00:40:19.096861 extend-filesystems[1285]: Found loop1 May 10 00:40:19.098955 extend-filesystems[1285]: Found sr0 May 10 00:40:19.098955 extend-filesystems[1285]: Found vda May 10 00:40:19.098955 extend-filesystems[1285]: Found vda1 May 10 00:40:19.098955 extend-filesystems[1285]: Found vda2 May 10 00:40:19.098955 extend-filesystems[1285]: Found vda3 May 10 00:40:19.098955 extend-filesystems[1285]: Found usr May 10 00:40:19.098955 extend-filesystems[1285]: Found vda4 May 10 00:40:19.098955 extend-filesystems[1285]: Found vda6 May 10 00:40:19.098955 extend-filesystems[1285]: Found vda7 May 10 00:40:19.098955 extend-filesystems[1285]: Found vda9 May 10 00:40:19.098955 extend-filesystems[1285]: Checking size of /dev/vda9 May 10 00:40:19.116951 extend-filesystems[1285]: Resized partition /dev/vda9 May 10 00:40:19.109531 systemd[1]: Started update-engine.service. May 10 00:40:19.118642 update_engine[1300]: I0510 00:40:19.106189 1300 main.cc:92] Flatcar Update Engine starting May 10 00:40:19.118642 update_engine[1300]: I0510 00:40:19.109570 1300 update_check_scheduler.cc:74] Next update check in 5m16s May 10 00:40:19.112334 systemd[1]: Started locksmithd.service. May 10 00:40:19.129036 extend-filesystems[1337]: resize2fs 1.46.5 (30-Dec-2021) May 10 00:40:19.133361 env[1307]: time="2025-05-10T00:40:19.133283158Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 10 00:40:19.137388 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 10 00:40:19.141766 systemd-logind[1299]: Watching system buttons on /dev/input/event1 (Power Button) May 10 00:40:19.142117 systemd-logind[1299]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 00:40:19.142440 systemd-logind[1299]: New seat seat0. May 10 00:40:19.145446 systemd[1]: Started systemd-logind.service. 
May 10 00:40:19.156833 env[1307]: time="2025-05-10T00:40:19.153780014Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 10 00:40:19.156942 env[1307]: time="2025-05-10T00:40:19.156904755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 10 00:40:19.158734 env[1307]: time="2025-05-10T00:40:19.158686507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 10 00:40:19.158734 env[1307]: time="2025-05-10T00:40:19.158729618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 10 00:40:19.159034 env[1307]: time="2025-05-10T00:40:19.158998913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:40:19.159092 env[1307]: time="2025-05-10T00:40:19.159034149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 10 00:40:19.159092 env[1307]: time="2025-05-10T00:40:19.159053566Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 10 00:40:19.159092 env[1307]: time="2025-05-10T00:40:19.159066530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 10 00:40:19.159192 env[1307]: time="2025-05-10T00:40:19.159163402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 10 00:40:19.159565 env[1307]: time="2025-05-10T00:40:19.159533316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 10 00:40:19.159807 env[1307]: time="2025-05-10T00:40:19.159763748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:40:19.159807 env[1307]: time="2025-05-10T00:40:19.159796098Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 10 00:40:19.159873 env[1307]: time="2025-05-10T00:40:19.159859347Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 10 00:40:19.159906 env[1307]: time="2025-05-10T00:40:19.159875758Z" level=info msg="metadata content store policy set" policy=shared
May 10 00:40:19.167401 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 10 00:40:19.196400 extend-filesystems[1337]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 10 00:40:19.196400 extend-filesystems[1337]: old_desc_blocks = 1, new_desc_blocks = 1
May 10 00:40:19.196400 extend-filesystems[1337]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 10 00:40:19.201168 extend-filesystems[1285]: Resized filesystem in /dev/vda9
May 10 00:40:19.202464 bash[1341]: Updated "/home/core/.ssh/authorized_keys"
May 10 00:40:19.202893 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 10 00:40:19.203227 systemd[1]: Finished extend-filesystems.service.
May 10 00:40:19.205094 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 10 00:40:19.207597 env[1307]: time="2025-05-10T00:40:19.207506690Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 10 00:40:19.207597 env[1307]: time="2025-05-10T00:40:19.207566402Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 10 00:40:19.207597 env[1307]: time="2025-05-10T00:40:19.207588714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 10 00:40:19.207713 env[1307]: time="2025-05-10T00:40:19.207658705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 10 00:40:19.207713 env[1307]: time="2025-05-10T00:40:19.207684183Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 10 00:40:19.207713 env[1307]: time="2025-05-10T00:40:19.207704110Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 10 00:40:19.207813 env[1307]: time="2025-05-10T00:40:19.207720491Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 10 00:40:19.207813 env[1307]: time="2025-05-10T00:40:19.207743224Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 10 00:40:19.207813 env[1307]: time="2025-05-10T00:40:19.207761598Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 10 00:40:19.207813 env[1307]: time="2025-05-10T00:40:19.207780083Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 10 00:40:19.207813 env[1307]: time="2025-05-10T00:40:19.207797846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 10 00:40:19.207948 env[1307]: time="2025-05-10T00:40:19.207819487Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 10 00:40:19.207989 env[1307]: time="2025-05-10T00:40:19.207956524Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 10 00:40:19.208103 env[1307]: time="2025-05-10T00:40:19.208073353Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 10 00:40:19.208510 env[1307]: time="2025-05-10T00:40:19.208465488Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 10 00:40:19.208563 env[1307]: time="2025-05-10T00:40:19.208519259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208563 env[1307]: time="2025-05-10T00:40:19.208539116Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 10 00:40:19.208623 env[1307]: time="2025-05-10T00:40:19.208602575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208668 env[1307]: time="2025-05-10T00:40:19.208621962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208668 env[1307]: time="2025-05-10T00:40:19.208641048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208668 env[1307]: time="2025-05-10T00:40:19.208660204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208757 env[1307]: time="2025-05-10T00:40:19.208676324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208757 env[1307]: time="2025-05-10T00:40:19.208692835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208757 env[1307]: time="2025-05-10T00:40:19.208707352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208757 env[1307]: time="2025-05-10T00:40:19.208721729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208757 env[1307]: time="2025-05-10T00:40:19.208738861Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 10 00:40:19.208937 env[1307]: time="2025-05-10T00:40:19.208874966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208937 env[1307]: time="2025-05-10T00:40:19.208894974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208937 env[1307]: time="2025-05-10T00:40:19.208909631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 10 00:40:19.208937 env[1307]: time="2025-05-10T00:40:19.208924970Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 10 00:40:19.209062 env[1307]: time="2025-05-10T00:40:19.208944547Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 10 00:40:19.209062 env[1307]: time="2025-05-10T00:40:19.208962200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 10 00:40:19.209062 env[1307]: time="2025-05-10T00:40:19.209004950Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 10 00:40:19.209062 env[1307]: time="2025-05-10T00:40:19.209052920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 10 00:40:19.209451 env[1307]: time="2025-05-10T00:40:19.209360297Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 10 00:40:19.210360 env[1307]: time="2025-05-10T00:40:19.209460505Z" level=info msg="Connect containerd service"
May 10 00:40:19.210360 env[1307]: time="2025-05-10T00:40:19.209538180Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 10 00:40:19.210360 env[1307]: time="2025-05-10T00:40:19.210186987Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:40:19.210779 env[1307]: time="2025-05-10T00:40:19.210708756Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 10 00:40:19.210779 env[1307]: time="2025-05-10T00:40:19.210704308Z" level=info msg="Start subscribing containerd event"
May 10 00:40:19.210779 env[1307]: time="2025-05-10T00:40:19.210765272Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 10 00:40:19.210779 env[1307]: time="2025-05-10T00:40:19.210778116Z" level=info msg="Start recovering state"
May 10 00:40:19.210922 env[1307]: time="2025-05-10T00:40:19.210863136Z" level=info msg="Start event monitor"
May 10 00:40:19.210922 env[1307]: time="2025-05-10T00:40:19.210897410Z" level=info msg="Start snapshots syncer"
May 10 00:40:19.210922 env[1307]: time="2025-05-10T00:40:19.210912448Z" level=info msg="Start cni network conf syncer for default"
May 10 00:40:19.210922 env[1307]: time="2025-05-10T00:40:19.210921535Z" level=info msg="Start streaming server"
May 10 00:40:19.210918 systemd[1]: Started containerd.service.
May 10 00:40:19.211984 env[1307]: time="2025-05-10T00:40:19.211646395Z" level=info msg="containerd successfully booted in 0.079562s"
May 10 00:40:19.216660 locksmithd[1334]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 10 00:40:19.302541 systemd-networkd[1086]: eth0: Gained IPv6LL
May 10 00:40:19.304661 systemd[1]: Finished systemd-networkd-wait-online.service.
May 10 00:40:19.306170 systemd[1]: Reached target network-online.target.
May 10 00:40:19.309042 systemd[1]: Starting kubelet.service...
May 10 00:40:19.533460 tar[1304]: linux-amd64/LICENSE
May 10 00:40:19.533728 tar[1304]: linux-amd64/README.md
May 10 00:40:19.538930 systemd[1]: Finished prepare-helm.service.
May 10 00:40:19.915335 systemd[1]: Started kubelet.service.
May 10 00:40:19.964832 sshd_keygen[1315]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 10 00:40:19.988057 systemd[1]: Finished sshd-keygen.service.
May 10 00:40:19.990633 systemd[1]: Starting issuegen.service...
May 10 00:40:19.997846 systemd[1]: issuegen.service: Deactivated successfully.
May 10 00:40:19.998121 systemd[1]: Finished issuegen.service.
May 10 00:40:20.020131 systemd[1]: Starting systemd-user-sessions.service...
May 10 00:40:20.026017 systemd[1]: Finished systemd-user-sessions.service.
May 10 00:40:20.028561 systemd[1]: Started getty@tty1.service.
May 10 00:40:20.030657 systemd[1]: Started serial-getty@ttyS0.service.
May 10 00:40:20.031854 systemd[1]: Reached target getty.target.
May 10 00:40:20.032822 systemd[1]: Reached target multi-user.target.
May 10 00:40:20.034952 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 10 00:40:20.044161 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 10 00:40:20.044432 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 10 00:40:20.047682 systemd[1]: Startup finished in 6.892s (kernel) + 7.104s (userspace) = 13.997s.
May 10 00:40:20.395562 kubelet[1369]: E0510 00:40:20.395393    1369 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:40:20.397476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:40:20.397663 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:40:20.533357 systemd[1]: Created slice system-sshd.slice.
May 10 00:40:20.534621 systemd[1]: Started sshd@0-10.0.0.68:22-10.0.0.1:55366.service.
May 10 00:40:20.581953 sshd[1396]: Accepted publickey for core from 10.0.0.1 port 55366 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:40:20.584019 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:40:20.594021 systemd-logind[1299]: New session 1 of user core.
May 10 00:40:20.594989 systemd[1]: Created slice user-500.slice.
May 10 00:40:20.596202 systemd[1]: Starting user-runtime-dir@500.service...
May 10 00:40:20.605794 systemd[1]: Finished user-runtime-dir@500.service.
May 10 00:40:20.607182 systemd[1]: Starting user@500.service...
May 10 00:40:20.610616 (systemd)[1401]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 10 00:40:20.686885 systemd[1401]: Queued start job for default target default.target.
May 10 00:40:20.687118 systemd[1401]: Reached target paths.target.
May 10 00:40:20.687134 systemd[1401]: Reached target sockets.target.
May 10 00:40:20.687147 systemd[1401]: Reached target timers.target.
May 10 00:40:20.687158 systemd[1401]: Reached target basic.target.
May 10 00:40:20.687202 systemd[1401]: Reached target default.target.
May 10 00:40:20.687226 systemd[1401]: Startup finished in 70ms.
May 10 00:40:20.687391 systemd[1]: Started user@500.service.
May 10 00:40:20.688599 systemd[1]: Started session-1.scope.
May 10 00:40:20.739206 systemd[1]: Started sshd@1-10.0.0.68:22-10.0.0.1:55372.service.
May 10 00:40:20.780769 sshd[1410]: Accepted publickey for core from 10.0.0.1 port 55372 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:40:20.782209 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:40:20.786228 systemd-logind[1299]: New session 2 of user core.
May 10 00:40:20.787201 systemd[1]: Started session-2.scope.
May 10 00:40:20.844212 sshd[1410]: pam_unix(sshd:session): session closed for user core
May 10 00:40:20.847573 systemd[1]: Started sshd@2-10.0.0.68:22-10.0.0.1:55378.service.
May 10 00:40:20.848278 systemd[1]: sshd@1-10.0.0.68:22-10.0.0.1:55372.service: Deactivated successfully.
May 10 00:40:20.849216 systemd-logind[1299]: Session 2 logged out. Waiting for processes to exit.
May 10 00:40:20.849297 systemd[1]: session-2.scope: Deactivated successfully.
May 10 00:40:20.850577 systemd-logind[1299]: Removed session 2.
May 10 00:40:20.889140 sshd[1416]: Accepted publickey for core from 10.0.0.1 port 55378 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:40:20.890776 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:40:20.894964 systemd-logind[1299]: New session 3 of user core.
May 10 00:40:20.895724 systemd[1]: Started session-3.scope.
May 10 00:40:20.946523 sshd[1416]: pam_unix(sshd:session): session closed for user core
May 10 00:40:20.949045 systemd[1]: Started sshd@3-10.0.0.68:22-10.0.0.1:55394.service.
May 10 00:40:20.950554 systemd[1]: sshd@2-10.0.0.68:22-10.0.0.1:55378.service: Deactivated successfully.
May 10 00:40:20.951310 systemd[1]: session-3.scope: Deactivated successfully.
May 10 00:40:20.951344 systemd-logind[1299]: Session 3 logged out. Waiting for processes to exit.
May 10 00:40:20.952213 systemd-logind[1299]: Removed session 3.
May 10 00:40:20.987965 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 55394 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:40:20.989254 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:40:20.993344 systemd-logind[1299]: New session 4 of user core.
May 10 00:40:20.994416 systemd[1]: Started session-4.scope.
May 10 00:40:21.048688 sshd[1422]: pam_unix(sshd:session): session closed for user core
May 10 00:40:21.053713 systemd[1]: sshd@3-10.0.0.68:22-10.0.0.1:55394.service: Deactivated successfully.
May 10 00:40:21.054507 systemd-logind[1299]: Session 4 logged out. Waiting for processes to exit.
May 10 00:40:21.055702 systemd[1]: Started sshd@4-10.0.0.68:22-10.0.0.1:55408.service.
May 10 00:40:21.055997 systemd[1]: session-4.scope: Deactivated successfully.
May 10 00:40:21.056876 systemd-logind[1299]: Removed session 4.
May 10 00:40:21.097128 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 55408 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:40:21.098409 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:40:21.102645 systemd-logind[1299]: New session 5 of user core.
May 10 00:40:21.103721 systemd[1]: Started session-5.scope.
May 10 00:40:21.164140 sudo[1435]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 10 00:40:21.164487 sudo[1435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 10 00:40:21.194599 systemd[1]: Starting docker.service...
May 10 00:40:21.235247 env[1447]: time="2025-05-10T00:40:21.235101397Z" level=info msg="Starting up"
May 10 00:40:21.236334 env[1447]: time="2025-05-10T00:40:21.236309652Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 10 00:40:21.236334 env[1447]: time="2025-05-10T00:40:21.236328464Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 10 00:40:21.236421 env[1447]: time="2025-05-10T00:40:21.236345975Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 10 00:40:21.236421 env[1447]: time="2025-05-10T00:40:21.236354993Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 10 00:40:21.238042 env[1447]: time="2025-05-10T00:40:21.238020222Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 10 00:40:21.238042 env[1447]: time="2025-05-10T00:40:21.238035949Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 10 00:40:21.238111 env[1447]: time="2025-05-10T00:40:21.238046348Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 10 00:40:21.238111 env[1447]: time="2025-05-10T00:40:21.238053045Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 10 00:40:21.243592 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2841865118-merged.mount: Deactivated successfully.
May 10 00:40:21.844309 env[1447]: time="2025-05-10T00:40:21.844250447Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 10 00:40:21.844309 env[1447]: time="2025-05-10T00:40:21.844278439Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 10 00:40:21.844540 env[1447]: time="2025-05-10T00:40:21.844450036Z" level=info msg="Loading containers: start."
May 10 00:40:21.994403 kernel: Initializing XFRM netlink socket
May 10 00:40:22.022877 env[1447]: time="2025-05-10T00:40:22.022821327Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 10 00:40:22.078155 systemd-networkd[1086]: docker0: Link UP
May 10 00:40:22.099402 env[1447]: time="2025-05-10T00:40:22.099246944Z" level=info msg="Loading containers: done."
May 10 00:40:22.113479 env[1447]: time="2025-05-10T00:40:22.113411998Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 10 00:40:22.113696 env[1447]: time="2025-05-10T00:40:22.113646774Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 10 00:40:22.113807 env[1447]: time="2025-05-10T00:40:22.113780674Z" level=info msg="Daemon has completed initialization"
May 10 00:40:22.135482 systemd[1]: Started docker.service.
May 10 00:40:22.139179 env[1447]: time="2025-05-10T00:40:22.139100831Z" level=info msg="API listen on /run/docker.sock"
May 10 00:40:22.927899 env[1307]: time="2025-05-10T00:40:22.927849607Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 10 00:40:24.590147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923134217.mount: Deactivated successfully.
May 10 00:40:27.588513 env[1307]: time="2025-05-10T00:40:27.588413331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:27.591712 env[1307]: time="2025-05-10T00:40:27.591648295Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:27.594133 env[1307]: time="2025-05-10T00:40:27.594075533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:27.596127 env[1307]: time="2025-05-10T00:40:27.596093812Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:27.596986 env[1307]: time="2025-05-10T00:40:27.596942107Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 10 00:40:27.614057 env[1307]: time="2025-05-10T00:40:27.614021385Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 10 00:40:29.986479 env[1307]: time="2025-05-10T00:40:29.985813299Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:29.990596 env[1307]: time="2025-05-10T00:40:29.990504584Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:30.000965 env[1307]: time="2025-05-10T00:40:30.000880762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:30.036010 env[1307]: time="2025-05-10T00:40:30.035939707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:30.036867 env[1307]: time="2025-05-10T00:40:30.036824798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 10 00:40:30.047716 env[1307]: time="2025-05-10T00:40:30.047667417Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 10 00:40:30.597973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 10 00:40:30.598223 systemd[1]: Stopped kubelet.service.
May 10 00:40:30.599837 systemd[1]: Starting kubelet.service...
May 10 00:40:30.686432 systemd[1]: Started kubelet.service.
May 10 00:40:31.059623 kubelet[1607]: E0510 00:40:31.059466    1607 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:40:31.062921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:40:31.063072 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:40:33.096191 env[1307]: time="2025-05-10T00:40:33.096108479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:33.100126 env[1307]: time="2025-05-10T00:40:33.100060362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:33.102093 env[1307]: time="2025-05-10T00:40:33.102060221Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:33.104135 env[1307]: time="2025-05-10T00:40:33.104094611Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:33.104888 env[1307]: time="2025-05-10T00:40:33.104846879Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 10 00:40:33.115690 env[1307]: time="2025-05-10T00:40:33.115637327Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 10 00:40:34.683124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2676187965.mount: Deactivated successfully.
May 10 00:40:35.794352 env[1307]: time="2025-05-10T00:40:35.794274697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:35.816322 env[1307]: time="2025-05-10T00:40:35.816246975Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:35.844519 env[1307]: time="2025-05-10T00:40:35.844446105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:35.846919 env[1307]: time="2025-05-10T00:40:35.846865591Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:35.847564 env[1307]: time="2025-05-10T00:40:35.847535688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 10 00:40:35.856517 env[1307]: time="2025-05-10T00:40:35.856461455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 10 00:40:36.464739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380901526.mount: Deactivated successfully.
May 10 00:40:39.312052 env[1307]: time="2025-05-10T00:40:39.311974665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:39.316117 env[1307]: time="2025-05-10T00:40:39.316020432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:39.318654 env[1307]: time="2025-05-10T00:40:39.318604773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:39.320712 env[1307]: time="2025-05-10T00:40:39.320663907Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:40:39.321506 env[1307]: time="2025-05-10T00:40:39.321466844Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 10 00:40:39.339504 env[1307]: time="2025-05-10T00:40:39.339457100Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 10 00:40:39.831111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620760950.mount: Deactivated successfully.
May 10 00:40:39.836325 env[1307]: time="2025-05-10T00:40:39.836286149Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:39.838186 env[1307]: time="2025-05-10T00:40:39.838117736Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:39.839902 env[1307]: time="2025-05-10T00:40:39.839865335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:39.841654 env[1307]: time="2025-05-10T00:40:39.841608553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:39.842056 env[1307]: time="2025-05-10T00:40:39.842019235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 10 00:40:39.855580 env[1307]: time="2025-05-10T00:40:39.855530545Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 10 00:40:40.982576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443745473.mount: Deactivated successfully. May 10 00:40:41.097873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:40:41.098112 systemd[1]: Stopped kubelet.service. May 10 00:40:41.120177 systemd[1]: Starting kubelet.service... May 10 00:40:41.215971 systemd[1]: Started kubelet.service. 
May 10 00:40:41.399030 kubelet[1652]: E0510 00:40:41.398969 1652 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:40:41.403199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:40:41.403409 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:40:44.006773 env[1307]: time="2025-05-10T00:40:44.006687983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:44.008855 env[1307]: time="2025-05-10T00:40:44.008824783Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:44.014026 env[1307]: time="2025-05-10T00:40:44.013973218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:44.015187 env[1307]: time="2025-05-10T00:40:44.015115219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 10 00:40:44.016180 env[1307]: time="2025-05-10T00:40:44.016142148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:46.341929 systemd[1]: Stopped kubelet.service. May 10 00:40:46.344096 systemd[1]: Starting kubelet.service... 
May 10 00:40:46.359940 systemd[1]: Reloading. May 10 00:40:46.414896 /usr/lib/systemd/system-generators/torcx-generator[1764]: time="2025-05-10T00:40:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:40:46.414935 /usr/lib/systemd/system-generators/torcx-generator[1764]: time="2025-05-10T00:40:46Z" level=info msg="torcx already run" May 10 00:40:46.602631 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:40:46.602649 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:40:46.627124 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:40:46.702315 systemd[1]: Started kubelet.service. May 10 00:40:46.703817 systemd[1]: Stopping kubelet.service... May 10 00:40:46.704107 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:40:46.704301 systemd[1]: Stopped kubelet.service. May 10 00:40:46.705789 systemd[1]: Starting kubelet.service... May 10 00:40:46.781814 systemd[1]: Started kubelet.service. May 10 00:40:46.823406 kubelet[1825]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:40:46.823406 kubelet[1825]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 10 00:40:46.823406 kubelet[1825]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:40:46.826726 kubelet[1825]: I0510 00:40:46.826691 1825 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:40:47.078509 kubelet[1825]: I0510 00:40:47.078400 1825 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:40:47.078509 kubelet[1825]: I0510 00:40:47.078434 1825 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:40:47.078684 kubelet[1825]: I0510 00:40:47.078628 1825 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:40:47.094451 kubelet[1825]: I0510 00:40:47.094411 1825 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:40:47.095461 kubelet[1825]: E0510 00:40:47.095434 1825 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:47.106453 kubelet[1825]: I0510 00:40:47.106408 1825 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:40:47.106775 kubelet[1825]: I0510 00:40:47.106737 1825 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:40:47.106931 kubelet[1825]: I0510 00:40:47.106767 1825 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:40:47.107525 kubelet[1825]: I0510 00:40:47.107504 1825 topology_manager.go:138] "Creating topology manager with none policy" May 10 
00:40:47.107525 kubelet[1825]: I0510 00:40:47.107521 1825 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:40:47.107649 kubelet[1825]: I0510 00:40:47.107630 1825 state_mem.go:36] "Initialized new in-memory state store" May 10 00:40:47.108467 kubelet[1825]: I0510 00:40:47.108447 1825 kubelet.go:400] "Attempting to sync node with API server" May 10 00:40:47.108467 kubelet[1825]: I0510 00:40:47.108465 1825 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:40:47.108522 kubelet[1825]: I0510 00:40:47.108483 1825 kubelet.go:312] "Adding apiserver pod source" May 10 00:40:47.108522 kubelet[1825]: I0510 00:40:47.108498 1825 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:40:47.114884 kubelet[1825]: I0510 00:40:47.114860 1825 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:40:47.122388 kubelet[1825]: I0510 00:40:47.122335 1825 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:40:47.122388 kubelet[1825]: W0510 00:40:47.122396 1825 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 10 00:40:47.122843 kubelet[1825]: I0510 00:40:47.122821 1825 server.go:1264] "Started kubelet" May 10 00:40:47.127794 kubelet[1825]: I0510 00:40:47.127760 1825 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:40:47.128663 kubelet[1825]: I0510 00:40:47.128632 1825 server.go:455] "Adding debug handlers to kubelet server" May 10 00:40:47.130290 kubelet[1825]: W0510 00:40:47.130232 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:47.130290 kubelet[1825]: E0510 00:40:47.130296 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:47.137874 kubelet[1825]: W0510 00:40:47.137826 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:47.137874 kubelet[1825]: E0510 00:40:47.137875 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:47.138011 kubelet[1825]: I0510 00:40:47.137938 1825 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:40:47.138184 kubelet[1825]: I0510 00:40:47.138166 1825 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:40:47.140815 kernel: SELinux: Context 
system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 10 00:40:47.140915 kubelet[1825]: I0510 00:40:47.140894 1825 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:40:47.141103 kubelet[1825]: I0510 00:40:47.141084 1825 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:40:47.141267 kubelet[1825]: I0510 00:40:47.141250 1825 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:40:47.141423 kubelet[1825]: I0510 00:40:47.141409 1825 reconciler.go:26] "Reconciler: start to sync state" May 10 00:40:47.141850 kubelet[1825]: W0510 00:40:47.141810 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:47.141921 kubelet[1825]: E0510 00:40:47.141856 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:47.142053 kubelet[1825]: E0510 00:40:47.141966 1825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183e0395bd08b2c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-10 00:40:47.122805444 +0000 UTC m=+0.337368799,LastTimestamp:2025-05-10 00:40:47.122805444 +0000 UTC m=+0.337368799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 10 00:40:47.142687 kubelet[1825]: I0510 00:40:47.142657 1825 factory.go:221] Registration of the systemd container factory successfully May 10 00:40:47.142812 kubelet[1825]: I0510 00:40:47.142790 1825 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:40:47.143082 kubelet[1825]: E0510 00:40:47.142858 1825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="200ms" May 10 00:40:47.143328 kubelet[1825]: E0510 00:40:47.143292 1825 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:40:47.143740 kubelet[1825]: I0510 00:40:47.143724 1825 factory.go:221] Registration of the containerd container factory successfully May 10 00:40:47.158886 kubelet[1825]: I0510 00:40:47.158816 1825 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:40:47.159793 kubelet[1825]: I0510 00:40:47.159778 1825 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:40:47.159894 kubelet[1825]: I0510 00:40:47.159877 1825 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:40:47.159962 kubelet[1825]: I0510 00:40:47.159905 1825 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:40:47.159962 kubelet[1825]: E0510 00:40:47.159951 1825 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:40:47.160627 kubelet[1825]: W0510 00:40:47.160518 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:47.160627 kubelet[1825]: E0510 00:40:47.160564 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:47.163266 kubelet[1825]: I0510 00:40:47.163243 1825 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:40:47.163266 kubelet[1825]: I0510 00:40:47.163262 1825 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:40:47.163345 kubelet[1825]: I0510 00:40:47.163277 1825 state_mem.go:36] "Initialized new in-memory state store" May 10 00:40:47.242227 kubelet[1825]: I0510 00:40:47.242196 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:40:47.242501 kubelet[1825]: E0510 00:40:47.242478 1825 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" May 10 00:40:47.260831 kubelet[1825]: E0510 00:40:47.260774 1825 kubelet.go:2361] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" May 10 00:40:47.343779 kubelet[1825]: E0510 00:40:47.343655 1825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="400ms" May 10 00:40:47.444159 kubelet[1825]: I0510 00:40:47.444098 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:40:47.444670 kubelet[1825]: E0510 00:40:47.444623 1825 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" May 10 00:40:47.445836 kubelet[1825]: I0510 00:40:47.445796 1825 policy_none.go:49] "None policy: Start" May 10 00:40:47.446350 kubelet[1825]: I0510 00:40:47.446321 1825 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:40:47.446350 kubelet[1825]: I0510 00:40:47.446343 1825 state_mem.go:35] "Initializing new in-memory state store" May 10 00:40:47.452435 kubelet[1825]: I0510 00:40:47.452388 1825 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:40:47.452658 kubelet[1825]: I0510 00:40:47.452604 1825 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:40:47.452776 kubelet[1825]: I0510 00:40:47.452753 1825 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:40:47.454056 kubelet[1825]: E0510 00:40:47.454037 1825 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 10 00:40:47.461331 kubelet[1825]: I0510 00:40:47.461262 1825 topology_manager.go:215] "Topology Admit Handler" podUID="29ede830b3e802f0ec2272f2dbe78386" 
podNamespace="kube-system" podName="kube-apiserver-localhost" May 10 00:40:47.462322 kubelet[1825]: I0510 00:40:47.462279 1825 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 10 00:40:47.463276 kubelet[1825]: I0510 00:40:47.463246 1825 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 10 00:40:47.543613 kubelet[1825]: I0510 00:40:47.543568 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29ede830b3e802f0ec2272f2dbe78386-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"29ede830b3e802f0ec2272f2dbe78386\") " pod="kube-system/kube-apiserver-localhost" May 10 00:40:47.543613 kubelet[1825]: I0510 00:40:47.543612 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29ede830b3e802f0ec2272f2dbe78386-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"29ede830b3e802f0ec2272f2dbe78386\") " pod="kube-system/kube-apiserver-localhost" May 10 00:40:47.543834 kubelet[1825]: I0510 00:40:47.543635 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 10 00:40:47.543834 kubelet[1825]: I0510 00:40:47.543672 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29ede830b3e802f0ec2272f2dbe78386-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"29ede830b3e802f0ec2272f2dbe78386\") " pod="kube-system/kube-apiserver-localhost" May 10 00:40:47.543834 kubelet[1825]: I0510 00:40:47.543689 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:47.543834 kubelet[1825]: I0510 00:40:47.543730 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:47.543834 kubelet[1825]: I0510 00:40:47.543751 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:47.543955 kubelet[1825]: I0510 00:40:47.543769 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:47.543955 kubelet[1825]: I0510 00:40:47.543854 1825 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:47.744435 kubelet[1825]: E0510 00:40:47.744336 1825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="800ms" May 10 00:40:47.765858 kubelet[1825]: E0510 00:40:47.765808 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:47.766265 kubelet[1825]: E0510 00:40:47.766211 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:47.766574 env[1307]: time="2025-05-10T00:40:47.766535390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:29ede830b3e802f0ec2272f2dbe78386,Namespace:kube-system,Attempt:0,}" May 10 00:40:47.767047 env[1307]: time="2025-05-10T00:40:47.766995791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 10 00:40:47.769589 kubelet[1825]: E0510 00:40:47.769249 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:47.769717 env[1307]: time="2025-05-10T00:40:47.769679700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 10 00:40:47.846473 kubelet[1825]: I0510 00:40:47.846434 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" 
May 10 00:40:47.846919 kubelet[1825]: E0510 00:40:47.846833 1825 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" May 10 00:40:48.067627 kubelet[1825]: W0510 00:40:48.067431 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:48.067627 kubelet[1825]: E0510 00:40:48.067516 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.68:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:48.099398 kubelet[1825]: W0510 00:40:48.099311 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:48.099398 kubelet[1825]: E0510 00:40:48.099409 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:48.372815 kubelet[1825]: W0510 00:40:48.372713 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:48.372815 kubelet[1825]: E0510 00:40:48.372809 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:48.495862 kubelet[1825]: W0510 00:40:48.495795 1825 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:48.495862 kubelet[1825]: E0510 00:40:48.495856 1825 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.68:6443: connect: connection refused May 10 00:40:48.537929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048767758.mount: Deactivated successfully. May 10 00:40:48.545558 kubelet[1825]: E0510 00:40:48.545510 1825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.68:6443: connect: connection refused" interval="1.6s" May 10 00:40:48.546673 env[1307]: time="2025-05-10T00:40:48.546629482Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.559677 env[1307]: time="2025-05-10T00:40:48.559626516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.561748 env[1307]: time="2025-05-10T00:40:48.561709958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.562775 env[1307]: 
time="2025-05-10T00:40:48.562713585Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.565219 env[1307]: time="2025-05-10T00:40:48.565189415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.567581 env[1307]: time="2025-05-10T00:40:48.567553556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.568910 env[1307]: time="2025-05-10T00:40:48.568880920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.570109 env[1307]: time="2025-05-10T00:40:48.570076623Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.571555 env[1307]: time="2025-05-10T00:40:48.571525668Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.572867 env[1307]: time="2025-05-10T00:40:48.572841830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.574396 env[1307]: time="2025-05-10T00:40:48.574336759Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.576136 env[1307]: time="2025-05-10T00:40:48.576101955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:40:48.604877 env[1307]: time="2025-05-10T00:40:48.604753457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:40:48.605036 env[1307]: time="2025-05-10T00:40:48.604898035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:40:48.605036 env[1307]: time="2025-05-10T00:40:48.604936774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:40:48.605351 env[1307]: time="2025-05-10T00:40:48.605299792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6715bdf48edb3c7c5b4bc0e46b6effb631f3bfcc4adf074003fee8dd84609aec pid=1872 runtime=io.containerd.runc.v2 May 10 00:40:48.608941 env[1307]: time="2025-05-10T00:40:48.608805133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:40:48.608941 env[1307]: time="2025-05-10T00:40:48.608838091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:40:48.608941 env[1307]: time="2025-05-10T00:40:48.608847741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:40:48.609180 env[1307]: time="2025-05-10T00:40:48.609143561Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49e8350145d528f21443a14b9c8b0a49401037f3901e5971d0f16b71fe2bf792 pid=1874 runtime=io.containerd.runc.v2 May 10 00:40:48.639672 env[1307]: time="2025-05-10T00:40:48.638115653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:40:48.639672 env[1307]: time="2025-05-10T00:40:48.638172700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:40:48.639672 env[1307]: time="2025-05-10T00:40:48.638187962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:40:48.639904 env[1307]: time="2025-05-10T00:40:48.638905610Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a6c0fc0722d2c0814479913b7f5ebad94329350b2b76dc7f0efe4327d9aa9b4 pid=1919 runtime=io.containerd.runc.v2 May 10 00:40:48.811229 kubelet[1825]: I0510 00:40:48.811184 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:40:48.815031 kubelet[1825]: E0510 00:40:48.815005 1825 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.68:6443/api/v1/nodes\": dial tcp 10.0.0.68:6443: connect: connection refused" node="localhost" May 10 00:40:48.870582 env[1307]: time="2025-05-10T00:40:48.870529151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:29ede830b3e802f0ec2272f2dbe78386,Namespace:kube-system,Attempt:0,} returns sandbox id \"49e8350145d528f21443a14b9c8b0a49401037f3901e5971d0f16b71fe2bf792\"" May 10 00:40:48.871594 kubelet[1825]: E0510 
00:40:48.871567 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:48.875006 env[1307]: time="2025-05-10T00:40:48.874970169Z" level=info msg="CreateContainer within sandbox \"49e8350145d528f21443a14b9c8b0a49401037f3901e5971d0f16b71fe2bf792\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 10 00:40:48.876113 env[1307]: time="2025-05-10T00:40:48.876076889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"6715bdf48edb3c7c5b4bc0e46b6effb631f3bfcc4adf074003fee8dd84609aec\"" May 10 00:40:48.876789 kubelet[1825]: E0510 00:40:48.876606 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:48.878670 env[1307]: time="2025-05-10T00:40:48.878636462Z" level=info msg="CreateContainer within sandbox \"6715bdf48edb3c7c5b4bc0e46b6effb631f3bfcc4adf074003fee8dd84609aec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 10 00:40:48.895678 env[1307]: time="2025-05-10T00:40:48.895220184Z" level=info msg="CreateContainer within sandbox \"49e8350145d528f21443a14b9c8b0a49401037f3901e5971d0f16b71fe2bf792\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b7697a0e31effe2308454ec2d9e6a4d873475d7abcec923111b697167ca9ad75\"" May 10 00:40:48.895977 env[1307]: time="2025-05-10T00:40:48.895416267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a6c0fc0722d2c0814479913b7f5ebad94329350b2b76dc7f0efe4327d9aa9b4\"" May 10 00:40:48.896691 env[1307]: time="2025-05-10T00:40:48.896647174Z" 
level=info msg="StartContainer for \"b7697a0e31effe2308454ec2d9e6a4d873475d7abcec923111b697167ca9ad75\"" May 10 00:40:48.896878 kubelet[1825]: E0510 00:40:48.896781 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:48.898496 env[1307]: time="2025-05-10T00:40:48.898454646Z" level=info msg="CreateContainer within sandbox \"8a6c0fc0722d2c0814479913b7f5ebad94329350b2b76dc7f0efe4327d9aa9b4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 10 00:40:48.907941 env[1307]: time="2025-05-10T00:40:48.907864040Z" level=info msg="CreateContainer within sandbox \"6715bdf48edb3c7c5b4bc0e46b6effb631f3bfcc4adf074003fee8dd84609aec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"90c828970a1be0ccdf9e33644c7e0bedff6e6c7e8941052558c50f164cf211ff\"" May 10 00:40:48.908728 env[1307]: time="2025-05-10T00:40:48.908677626Z" level=info msg="StartContainer for \"90c828970a1be0ccdf9e33644c7e0bedff6e6c7e8941052558c50f164cf211ff\"" May 10 00:40:48.925421 env[1307]: time="2025-05-10T00:40:48.924494980Z" level=info msg="CreateContainer within sandbox \"8a6c0fc0722d2c0814479913b7f5ebad94329350b2b76dc7f0efe4327d9aa9b4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f3a825b5c75f694d6bba6ad841ca1cdcdc47edfa7949e98fcb7e679486965277\"" May 10 00:40:48.925764 env[1307]: time="2025-05-10T00:40:48.925742830Z" level=info msg="StartContainer for \"f3a825b5c75f694d6bba6ad841ca1cdcdc47edfa7949e98fcb7e679486965277\"" May 10 00:40:49.030052 env[1307]: time="2025-05-10T00:40:49.030003857Z" level=info msg="StartContainer for \"b7697a0e31effe2308454ec2d9e6a4d873475d7abcec923111b697167ca9ad75\" returns successfully" May 10 00:40:49.045112 env[1307]: time="2025-05-10T00:40:49.045060707Z" level=info msg="StartContainer for 
\"90c828970a1be0ccdf9e33644c7e0bedff6e6c7e8941052558c50f164cf211ff\" returns successfully" May 10 00:40:49.065188 env[1307]: time="2025-05-10T00:40:49.065097962Z" level=info msg="StartContainer for \"f3a825b5c75f694d6bba6ad841ca1cdcdc47edfa7949e98fcb7e679486965277\" returns successfully" May 10 00:40:49.098798 kubelet[1825]: E0510 00:40:49.098657 1825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183e0395bd08b2c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-10 00:40:47.122805444 +0000 UTC m=+0.337368799,LastTimestamp:2025-05-10 00:40:47.122805444 +0000 UTC m=+0.337368799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 10 00:40:49.167743 kubelet[1825]: E0510 00:40:49.167606 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:49.169510 kubelet[1825]: E0510 00:40:49.169483 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:49.174361 kubelet[1825]: E0510 00:40:49.174318 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:50.175855 kubelet[1825]: E0510 00:40:50.175796 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:50.249010 kubelet[1825]: E0510 00:40:50.248948 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:50.417348 kubelet[1825]: I0510 00:40:50.417285 1825 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:40:50.773889 kubelet[1825]: E0510 00:40:50.773803 1825 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 10 00:40:50.941314 kubelet[1825]: I0510 00:40:50.941260 1825 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 10 00:40:51.132291 kubelet[1825]: I0510 00:40:51.132214 1825 apiserver.go:52] "Watching apiserver" May 10 00:40:51.141807 kubelet[1825]: I0510 00:40:51.141769 1825 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:40:52.613531 kubelet[1825]: E0510 00:40:52.613468 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:53.178667 kubelet[1825]: E0510 00:40:53.178613 1825 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:53.224747 systemd[1]: Reloading. 
May 10 00:40:53.291959 /usr/lib/systemd/system-generators/torcx-generator[2120]: time="2025-05-10T00:40:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:40:53.291997 /usr/lib/systemd/system-generators/torcx-generator[2120]: time="2025-05-10T00:40:53Z" level=info msg="torcx already run" May 10 00:40:53.371833 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:40:53.371854 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:40:53.391215 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:40:53.468416 systemd[1]: Stopping kubelet.service... May 10 00:40:53.483757 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:40:53.484118 systemd[1]: Stopped kubelet.service. May 10 00:40:53.486277 systemd[1]: Starting kubelet.service... May 10 00:40:53.565514 systemd[1]: Started kubelet.service. May 10 00:40:53.640630 kubelet[2175]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:40:53.640630 kubelet[2175]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 10 00:40:53.640630 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:40:53.641091 kubelet[2175]: I0510 00:40:53.640674 2175 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:40:53.644901 kubelet[2175]: I0510 00:40:53.644874 2175 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 00:40:53.644901 kubelet[2175]: I0510 00:40:53.644896 2175 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:40:53.645083 kubelet[2175]: I0510 00:40:53.645057 2175 server.go:927] "Client rotation is on, will bootstrap in background" May 10 00:40:53.649327 kubelet[2175]: I0510 00:40:53.649300 2175 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 00:40:53.650339 kubelet[2175]: I0510 00:40:53.650270 2175 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:40:53.657671 kubelet[2175]: I0510 00:40:53.657636 2175 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 00:40:53.658072 kubelet[2175]: I0510 00:40:53.658033 2175 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:40:53.658230 kubelet[2175]: I0510 00:40:53.658063 2175 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 00:40:53.658317 kubelet[2175]: I0510 00:40:53.658242 2175 topology_manager.go:138] "Creating topology manager with none policy" May 10 
00:40:53.658317 kubelet[2175]: I0510 00:40:53.658251 2175 container_manager_linux.go:301] "Creating device plugin manager" May 10 00:40:53.658317 kubelet[2175]: I0510 00:40:53.658286 2175 state_mem.go:36] "Initialized new in-memory state store" May 10 00:40:53.658425 kubelet[2175]: I0510 00:40:53.658357 2175 kubelet.go:400] "Attempting to sync node with API server" May 10 00:40:53.658425 kubelet[2175]: I0510 00:40:53.658385 2175 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:40:53.658425 kubelet[2175]: I0510 00:40:53.658405 2175 kubelet.go:312] "Adding apiserver pod source" May 10 00:40:53.658425 kubelet[2175]: I0510 00:40:53.658420 2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:40:53.659160 kubelet[2175]: I0510 00:40:53.659136 2175 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:40:53.659275 kubelet[2175]: I0510 00:40:53.659267 2175 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:40:53.659702 kubelet[2175]: I0510 00:40:53.659682 2175 server.go:1264] "Started kubelet" May 10 00:40:53.661248 kubelet[2175]: I0510 00:40:53.661223 2175 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:40:53.663588 kubelet[2175]: I0510 00:40:53.663543 2175 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:40:53.664458 kubelet[2175]: I0510 00:40:53.664338 2175 server.go:455] "Adding debug handlers to kubelet server" May 10 00:40:53.665120 kubelet[2175]: I0510 00:40:53.665072 2175 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:40:53.670656 kubelet[2175]: I0510 00:40:53.665268 2175 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:40:53.670656 kubelet[2175]: I0510 00:40:53.667469 2175 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 00:40:53.670656 kubelet[2175]: I0510 00:40:53.667603 2175 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 00:40:53.670656 kubelet[2175]: I0510 00:40:53.667728 2175 reconciler.go:26] "Reconciler: start to sync state" May 10 00:40:53.670656 kubelet[2175]: E0510 00:40:53.667880 2175 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:40:53.670656 kubelet[2175]: I0510 00:40:53.669236 2175 factory.go:221] Registration of the systemd container factory successfully May 10 00:40:53.670656 kubelet[2175]: I0510 00:40:53.669382 2175 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:40:53.673489 kubelet[2175]: I0510 00:40:53.673472 2175 factory.go:221] Registration of the containerd container factory successfully May 10 00:40:53.675456 kubelet[2175]: I0510 00:40:53.675434 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:40:53.686491 kubelet[2175]: I0510 00:40:53.686439 2175 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:40:53.686491 kubelet[2175]: I0510 00:40:53.686478 2175 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:40:53.686491 kubelet[2175]: I0510 00:40:53.686504 2175 kubelet.go:2337] "Starting kubelet main sync loop" May 10 00:40:53.686685 kubelet[2175]: E0510 00:40:53.686549 2175 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:40:53.718874 kubelet[2175]: I0510 00:40:53.718759 2175 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:40:53.718874 kubelet[2175]: I0510 00:40:53.718784 2175 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:40:53.718874 kubelet[2175]: I0510 00:40:53.718812 2175 state_mem.go:36] "Initialized new in-memory state store" May 10 00:40:53.719092 kubelet[2175]: I0510 00:40:53.719004 2175 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:40:53.719092 kubelet[2175]: I0510 00:40:53.719015 2175 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:40:53.719092 kubelet[2175]: I0510 00:40:53.719034 2175 policy_none.go:49] "None policy: Start" May 10 00:40:53.720080 kubelet[2175]: I0510 00:40:53.720065 2175 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:40:53.720080 kubelet[2175]: I0510 00:40:53.720082 2175 state_mem.go:35] "Initializing new in-memory state store" May 10 00:40:53.720199 kubelet[2175]: I0510 00:40:53.720185 2175 state_mem.go:75] "Updated machine memory state" May 10 00:40:53.721403 kubelet[2175]: I0510 00:40:53.721380 2175 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:40:53.721572 kubelet[2175]: I0510 00:40:53.721533 2175 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:40:53.721660 kubelet[2175]: I0510 00:40:53.721636 2175 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:40:53.770737 kubelet[2175]: I0510 00:40:53.770703 2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 10 00:40:53.787065 kubelet[2175]: I0510 00:40:53.786969 2175 topology_manager.go:215] "Topology Admit Handler" podUID="29ede830b3e802f0ec2272f2dbe78386" podNamespace="kube-system" podName="kube-apiserver-localhost" May 10 00:40:53.787226 kubelet[2175]: I0510 00:40:53.787106 2175 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 10 00:40:53.787226 kubelet[2175]: I0510 00:40:53.787162 2175 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 10 00:40:53.800014 kubelet[2175]: I0510 00:40:53.799950 2175 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 10 00:40:53.800318 kubelet[2175]: I0510 00:40:53.800072 2175 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 10 00:40:53.958735 kubelet[2175]: E0510 00:40:53.958619 2175 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 10 00:40:53.968658 kubelet[2175]: I0510 00:40:53.968625 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:53.968784 kubelet[2175]: I0510 00:40:53.968665 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/29ede830b3e802f0ec2272f2dbe78386-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"29ede830b3e802f0ec2272f2dbe78386\") " pod="kube-system/kube-apiserver-localhost" May 10 00:40:53.968784 kubelet[2175]: I0510 00:40:53.968685 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:53.968784 kubelet[2175]: I0510 00:40:53.968703 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:53.968784 kubelet[2175]: I0510 00:40:53.968717 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:53.968784 kubelet[2175]: I0510 00:40:53.968731 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:40:53.968945 kubelet[2175]: I0510 00:40:53.968745 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 10 00:40:53.968945 kubelet[2175]: I0510 00:40:53.968760 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29ede830b3e802f0ec2272f2dbe78386-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"29ede830b3e802f0ec2272f2dbe78386\") " pod="kube-system/kube-apiserver-localhost" May 10 00:40:53.968945 kubelet[2175]: I0510 00:40:53.968773 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29ede830b3e802f0ec2272f2dbe78386-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"29ede830b3e802f0ec2272f2dbe78386\") " pod="kube-system/kube-apiserver-localhost" May 10 00:40:54.101782 kubelet[2175]: E0510 00:40:54.101741 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:54.101943 kubelet[2175]: E0510 00:40:54.101751 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:54.259495 kubelet[2175]: E0510 00:40:54.259384 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:54.374495 sudo[2208]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 00:40:54.374726 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 10 00:40:54.659036 kubelet[2175]: I0510 00:40:54.658966 2175 
apiserver.go:52] "Watching apiserver" May 10 00:40:54.668060 kubelet[2175]: I0510 00:40:54.667986 2175 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 10 00:40:54.729771 kubelet[2175]: E0510 00:40:54.729718 2175 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 10 00:40:54.730091 kubelet[2175]: E0510 00:40:54.730048 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:54.731228 kubelet[2175]: E0510 00:40:54.730892 2175 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 10 00:40:54.731448 kubelet[2175]: E0510 00:40:54.731427 2175 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 10 00:40:54.731576 kubelet[2175]: E0510 00:40:54.731549 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:54.732216 kubelet[2175]: E0510 00:40:54.731950 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:54.856940 sudo[2208]: pam_unix(sudo:session): session closed for user root May 10 00:40:54.899477 kubelet[2175]: I0510 00:40:54.899358 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.899338459 podStartE2EDuration="1.899338459s" podCreationTimestamp="2025-05-10 00:40:53 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:40:54.844165191 +0000 UTC m=+1.273746968" watchObservedRunningTime="2025-05-10 00:40:54.899338459 +0000 UTC m=+1.328920236" May 10 00:40:54.944623 kubelet[2175]: I0510 00:40:54.944470 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.944447381 podStartE2EDuration="2.944447381s" podCreationTimestamp="2025-05-10 00:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:40:54.899797123 +0000 UTC m=+1.329378910" watchObservedRunningTime="2025-05-10 00:40:54.944447381 +0000 UTC m=+1.374029158" May 10 00:40:55.694452 kubelet[2175]: E0510 00:40:55.694414 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:55.694452 kubelet[2175]: E0510 00:40:55.694430 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:55.695851 kubelet[2175]: E0510 00:40:55.694694 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:40:56.290933 sudo[1435]: pam_unix(sudo:session): session closed for user root May 10 00:40:56.292508 sshd[1431]: pam_unix(sshd:session): session closed for user core May 10 00:40:56.295140 systemd[1]: sshd@4-10.0.0.68:22-10.0.0.1:55408.service: Deactivated successfully. May 10 00:40:56.296244 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:40:56.296257 systemd-logind[1299]: Session 5 logged out. Waiting for processes to exit. 
May 10 00:40:56.296998 systemd-logind[1299]: Removed session 5.
May 10 00:40:57.684187 kubelet[2175]: E0510 00:40:57.684113 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:00.454190 kubelet[2175]: E0510 00:41:00.454130 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:00.471899 kubelet[2175]: I0510 00:41:00.471831 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.471812102 podStartE2EDuration="7.471812102s" podCreationTimestamp="2025-05-10 00:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:40:54.94475318 +0000 UTC m=+1.374334967" watchObservedRunningTime="2025-05-10 00:41:00.471812102 +0000 UTC m=+6.901393879"
May 10 00:41:00.702523 kubelet[2175]: E0510 00:41:00.702481 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:01.703439 kubelet[2175]: E0510 00:41:01.703403 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:02.348335 kubelet[2175]: E0510 00:41:02.348281 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:02.705177 kubelet[2175]: E0510 00:41:02.704655 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:04.051230 update_engine[1300]: I0510 00:41:04.051154 1300 update_attempter.cc:509] Updating boot flags...
May 10 00:41:07.688776 kubelet[2175]: E0510 00:41:07.688735 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:08.788506 kubelet[2175]: I0510 00:41:08.788454 2175 topology_manager.go:215] "Topology Admit Handler" podUID="2058d697-709e-47f2-9e0b-7d3b8998b321" podNamespace="kube-system" podName="cilium-operator-599987898-xrglp"
May 10 00:41:08.865063 kubelet[2175]: I0510 00:41:08.864981 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxr9\" (UniqueName: \"kubernetes.io/projected/2058d697-709e-47f2-9e0b-7d3b8998b321-kube-api-access-lkxr9\") pod \"cilium-operator-599987898-xrglp\" (UID: \"2058d697-709e-47f2-9e0b-7d3b8998b321\") " pod="kube-system/cilium-operator-599987898-xrglp"
May 10 00:41:08.865063 kubelet[2175]: I0510 00:41:08.865062 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2058d697-709e-47f2-9e0b-7d3b8998b321-cilium-config-path\") pod \"cilium-operator-599987898-xrglp\" (UID: \"2058d697-709e-47f2-9e0b-7d3b8998b321\") " pod="kube-system/cilium-operator-599987898-xrglp"
May 10 00:41:08.883665 kubelet[2175]: I0510 00:41:08.883618 2175 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 10 00:41:08.884132 env[1307]: time="2025-05-10T00:41:08.884093553Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 10 00:41:08.884478 kubelet[2175]: I0510 00:41:08.884429 2175 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 10 00:41:09.093440 kubelet[2175]: E0510 00:41:09.093272 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:09.095206 env[1307]: time="2025-05-10T00:41:09.094746295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xrglp,Uid:2058d697-709e-47f2-9e0b-7d3b8998b321,Namespace:kube-system,Attempt:0,}"
May 10 00:41:09.426690 kubelet[2175]: I0510 00:41:09.426619 2175 topology_manager.go:215] "Topology Admit Handler" podUID="88ef9173-0e62-43cc-93d0-d752bbeb36c4" podNamespace="kube-system" podName="kube-proxy-h6cv6"
May 10 00:41:09.440489 kubelet[2175]: I0510 00:41:09.440435 2175 topology_manager.go:215] "Topology Admit Handler" podUID="1382a6d9-ea67-4e19-ba10-0fc67a849a35" podNamespace="kube-system" podName="cilium-8srhz"
May 10 00:41:09.469792 kubelet[2175]: I0510 00:41:09.469708 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-cgroup\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.469792 kubelet[2175]: I0510 00:41:09.469770 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88ef9173-0e62-43cc-93d0-d752bbeb36c4-xtables-lock\") pod \"kube-proxy-h6cv6\" (UID: \"88ef9173-0e62-43cc-93d0-d752bbeb36c4\") " pod="kube-system/kube-proxy-h6cv6"
May 10 00:41:09.469792 kubelet[2175]: I0510 00:41:09.469790 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwcq\" (UniqueName: \"kubernetes.io/projected/88ef9173-0e62-43cc-93d0-d752bbeb36c4-kube-api-access-mwwcq\") pod \"kube-proxy-h6cv6\" (UID: \"88ef9173-0e62-43cc-93d0-d752bbeb36c4\") " pod="kube-system/kube-proxy-h6cv6"
May 10 00:41:09.469792 kubelet[2175]: I0510 00:41:09.469808 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-host-proc-sys-net\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470144 kubelet[2175]: I0510 00:41:09.469882 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-host-proc-sys-kernel\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470144 kubelet[2175]: I0510 00:41:09.469929 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1382a6d9-ea67-4e19-ba10-0fc67a849a35-hubble-tls\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470144 kubelet[2175]: I0510 00:41:09.469956 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88ef9173-0e62-43cc-93d0-d752bbeb36c4-lib-modules\") pod \"kube-proxy-h6cv6\" (UID: \"88ef9173-0e62-43cc-93d0-d752bbeb36c4\") " pod="kube-system/kube-proxy-h6cv6"
May 10 00:41:09.470144 kubelet[2175]: I0510 00:41:09.469979 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-bpf-maps\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470144 kubelet[2175]: I0510 00:41:09.469999 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-config-path\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470144 kubelet[2175]: I0510 00:41:09.470016 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-run\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470336 kubelet[2175]: I0510 00:41:09.470033 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-hostproc\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470336 kubelet[2175]: I0510 00:41:09.470048 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-xtables-lock\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470336 kubelet[2175]: I0510 00:41:09.470066 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88ef9173-0e62-43cc-93d0-d752bbeb36c4-kube-proxy\") pod \"kube-proxy-h6cv6\" (UID: \"88ef9173-0e62-43cc-93d0-d752bbeb36c4\") " pod="kube-system/kube-proxy-h6cv6"
May 10 00:41:09.470336 kubelet[2175]: I0510 00:41:09.470078 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-lib-modules\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470336 kubelet[2175]: I0510 00:41:09.470091 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwd79\" (UniqueName: \"kubernetes.io/projected/1382a6d9-ea67-4e19-ba10-0fc67a849a35-kube-api-access-mwd79\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470336 kubelet[2175]: I0510 00:41:09.470105 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cni-path\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470574 kubelet[2175]: I0510 00:41:09.470141 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-etc-cni-netd\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.470574 kubelet[2175]: I0510 00:41:09.470158 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1382a6d9-ea67-4e19-ba10-0fc67a849a35-clustermesh-secrets\") pod \"cilium-8srhz\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " pod="kube-system/cilium-8srhz"
May 10 00:41:09.569682 env[1307]: time="2025-05-10T00:41:09.569611401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:41:09.569682 env[1307]: time="2025-05-10T00:41:09.569650066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:41:09.569682 env[1307]: time="2025-05-10T00:41:09.569659643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:41:09.569931 env[1307]: time="2025-05-10T00:41:09.569810635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7 pid=2278 runtime=io.containerd.runc.v2
May 10 00:41:09.627678 env[1307]: time="2025-05-10T00:41:09.627635758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xrglp,Uid:2058d697-709e-47f2-9e0b-7d3b8998b321,Namespace:kube-system,Attempt:0,} returns sandbox id \"46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7\""
May 10 00:41:09.628430 kubelet[2175]: E0510 00:41:09.628409 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:09.630343 env[1307]: time="2025-05-10T00:41:09.630317421Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 10 00:41:09.730482 kubelet[2175]: E0510 00:41:09.729674 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:09.730646 env[1307]: time="2025-05-10T00:41:09.730077690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6cv6,Uid:88ef9173-0e62-43cc-93d0-d752bbeb36c4,Namespace:kube-system,Attempt:0,}"
May 10 00:41:09.743843 kubelet[2175]: E0510 00:41:09.743809 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:09.748292 env[1307]: time="2025-05-10T00:41:09.748205925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:41:09.748457 env[1307]: time="2025-05-10T00:41:09.748294106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:41:09.748457 env[1307]: time="2025-05-10T00:41:09.748332600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:41:09.748569 env[1307]: time="2025-05-10T00:41:09.748530792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30271656aeb284cece2f9901bb0f4b21a48369c32234a537350224f44c1515bc pid=2325 runtime=io.containerd.runc.v2
May 10 00:41:09.748763 env[1307]: time="2025-05-10T00:41:09.748716981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8srhz,Uid:1382a6d9-ea67-4e19-ba10-0fc67a849a35,Namespace:kube-system,Attempt:0,}"
May 10 00:41:09.767191 env[1307]: time="2025-05-10T00:41:09.767101982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:41:09.767191 env[1307]: time="2025-05-10T00:41:09.767189361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:41:09.767440 env[1307]: time="2025-05-10T00:41:09.767213648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:41:09.767825 env[1307]: time="2025-05-10T00:41:09.767441287Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0 pid=2352 runtime=io.containerd.runc.v2
May 10 00:41:09.788541 env[1307]: time="2025-05-10T00:41:09.788480418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6cv6,Uid:88ef9173-0e62-43cc-93d0-d752bbeb36c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"30271656aeb284cece2f9901bb0f4b21a48369c32234a537350224f44c1515bc\""
May 10 00:41:09.789502 kubelet[2175]: E0510 00:41:09.789475 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:09.792171 env[1307]: time="2025-05-10T00:41:09.792125097Z" level=info msg="CreateContainer within sandbox \"30271656aeb284cece2f9901bb0f4b21a48369c32234a537350224f44c1515bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 10 00:41:09.804514 env[1307]: time="2025-05-10T00:41:09.804460580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8srhz,Uid:1382a6d9-ea67-4e19-ba10-0fc67a849a35,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\""
May 10 00:41:09.804864 kubelet[2175]: E0510 00:41:09.804827 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:09.815911 env[1307]: time="2025-05-10T00:41:09.815857453Z" level=info msg="CreateContainer within sandbox \"30271656aeb284cece2f9901bb0f4b21a48369c32234a537350224f44c1515bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6097519ed49b962d66d6326d79ca5fc5ef4205ca73c7be23274eb6cba25597cd\""
May 10 00:41:09.816592 env[1307]: time="2025-05-10T00:41:09.816499671Z" level=info msg="StartContainer for \"6097519ed49b962d66d6326d79ca5fc5ef4205ca73c7be23274eb6cba25597cd\""
May 10 00:41:09.866917 env[1307]: time="2025-05-10T00:41:09.866826362Z" level=info msg="StartContainer for \"6097519ed49b962d66d6326d79ca5fc5ef4205ca73c7be23274eb6cba25597cd\" returns successfully"
May 10 00:41:10.719485 kubelet[2175]: E0510 00:41:10.719433 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:11.660488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670299324.mount: Deactivated successfully.
May 10 00:41:12.357846 env[1307]: time="2025-05-10T00:41:12.357786295Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:41:12.359909 env[1307]: time="2025-05-10T00:41:12.359843749Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:41:12.362119 env[1307]: time="2025-05-10T00:41:12.362064657Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:41:12.362577 env[1307]: time="2025-05-10T00:41:12.362546113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 10 00:41:12.364034 env[1307]: time="2025-05-10T00:41:12.363968127Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 10 00:41:12.365449 env[1307]: time="2025-05-10T00:41:12.365396101Z" level=info msg="CreateContainer within sandbox \"46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 10 00:41:12.378357 env[1307]: time="2025-05-10T00:41:12.378280585Z" level=info msg="CreateContainer within sandbox \"46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\""
May 10 00:41:12.379044 env[1307]: time="2025-05-10T00:41:12.378987714Z" level=info msg="StartContainer for \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\""
May 10 00:41:12.421531 env[1307]: time="2025-05-10T00:41:12.421440939Z" level=info msg="StartContainer for \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\" returns successfully"
May 10 00:41:12.724604 kubelet[2175]: E0510 00:41:12.724459 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:12.735081 kubelet[2175]: I0510 00:41:12.734996 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xrglp" podStartSLOduration=2.001318276 podStartE2EDuration="4.734973663s" podCreationTimestamp="2025-05-10 00:41:08 +0000 UTC" firstStartedPulling="2025-05-10 00:41:09.62987647 +0000 UTC m=+16.059458247" lastFinishedPulling="2025-05-10 00:41:12.363531857 +0000 UTC m=+18.793113634" observedRunningTime="2025-05-10 00:41:12.734055147 +0000 UTC m=+19.163636964" watchObservedRunningTime="2025-05-10 00:41:12.734973663 +0000 UTC m=+19.164555460"
May 10 00:41:12.735346 kubelet[2175]: I0510 00:41:12.735114 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h6cv6" podStartSLOduration=3.735106808 podStartE2EDuration="3.735106808s" podCreationTimestamp="2025-05-10 00:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:41:10.727356438 +0000 UTC m=+17.156938215" watchObservedRunningTime="2025-05-10 00:41:12.735106808 +0000 UTC m=+19.164688606"
May 10 00:41:13.726625 kubelet[2175]: E0510 00:41:13.726575 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:15.335152 systemd[1]: Started sshd@5-10.0.0.68:22-10.0.0.1:33304.service.
May 10 00:41:15.422841 sshd[2655]: Accepted publickey for core from 10.0.0.1 port 33304 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:15.424245 sshd[2655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:15.428582 systemd-logind[1299]: New session 6 of user core.
May 10 00:41:15.429374 systemd[1]: Started session-6.scope.
May 10 00:41:15.561306 sshd[2655]: pam_unix(sshd:session): session closed for user core
May 10 00:41:15.563562 systemd[1]: sshd@5-10.0.0.68:22-10.0.0.1:33304.service: Deactivated successfully.
May 10 00:41:15.564714 systemd[1]: session-6.scope: Deactivated successfully.
May 10 00:41:15.565132 systemd-logind[1299]: Session 6 logged out. Waiting for processes to exit.
May 10 00:41:15.565830 systemd-logind[1299]: Removed session 6.
May 10 00:41:17.355437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231036837.mount: Deactivated successfully.
May 10 00:41:20.564713 systemd[1]: Started sshd@6-10.0.0.68:22-10.0.0.1:54554.service.
May 10 00:41:21.187210 env[1307]: time="2025-05-10T00:41:21.187155724Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:41:21.189924 env[1307]: time="2025-05-10T00:41:21.189890231Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:41:21.192591 env[1307]: time="2025-05-10T00:41:21.192537031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:41:21.193221 env[1307]: time="2025-05-10T00:41:21.193174327Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 10 00:41:21.196834 env[1307]: time="2025-05-10T00:41:21.196782752Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 00:41:21.203129 sshd[2670]: Accepted publickey for core from 10.0.0.1 port 54554 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:21.205006 sshd[2670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:21.210337 systemd-logind[1299]: New session 7 of user core.
May 10 00:41:21.211520 systemd[1]: Started session-7.scope.
May 10 00:41:21.217879 env[1307]: time="2025-05-10T00:41:21.217816754Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\""
May 10 00:41:21.218566 env[1307]: time="2025-05-10T00:41:21.218530766Z" level=info msg="StartContainer for \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\""
May 10 00:41:21.280146 env[1307]: time="2025-05-10T00:41:21.279623120Z" level=info msg="StartContainer for \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\" returns successfully"
May 10 00:41:21.305131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122-rootfs.mount: Deactivated successfully.
May 10 00:41:21.337322 sshd[2670]: pam_unix(sshd:session): session closed for user core
May 10 00:41:21.339425 systemd[1]: sshd@6-10.0.0.68:22-10.0.0.1:54554.service: Deactivated successfully.
May 10 00:41:21.340514 systemd-logind[1299]: Session 7 logged out. Waiting for processes to exit.
May 10 00:41:21.340592 systemd[1]: session-7.scope: Deactivated successfully.
May 10 00:41:21.341483 systemd-logind[1299]: Removed session 7.
May 10 00:41:21.513643 env[1307]: time="2025-05-10T00:41:21.513477471Z" level=info msg="shim disconnected" id=2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122
May 10 00:41:21.513643 env[1307]: time="2025-05-10T00:41:21.513552214Z" level=warning msg="cleaning up after shim disconnected" id=2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122 namespace=k8s.io
May 10 00:41:21.513643 env[1307]: time="2025-05-10T00:41:21.513564417Z" level=info msg="cleaning up dead shim"
May 10 00:41:21.523044 env[1307]: time="2025-05-10T00:41:21.522970103Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:41:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2734 runtime=io.containerd.runc.v2\n"
May 10 00:41:21.741471 kubelet[2175]: E0510 00:41:21.741347 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:21.743743 env[1307]: time="2025-05-10T00:41:21.743706515Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 10 00:41:21.836007 env[1307]: time="2025-05-10T00:41:21.835826720Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\""
May 10 00:41:21.836485 env[1307]: time="2025-05-10T00:41:21.836435211Z" level=info msg="StartContainer for \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\""
May 10 00:41:21.919167 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 00:41:21.919471 systemd[1]: Stopped systemd-sysctl.service.
May 10 00:41:21.919649 systemd[1]: Stopping systemd-sysctl.service...
May 10 00:41:21.921328 systemd[1]: Starting systemd-sysctl.service...
May 10 00:41:21.929954 systemd[1]: Finished systemd-sysctl.service.
May 10 00:41:21.940755 env[1307]: time="2025-05-10T00:41:21.940686069Z" level=info msg="StartContainer for \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\" returns successfully"
May 10 00:41:22.046809 env[1307]: time="2025-05-10T00:41:22.046729227Z" level=info msg="shim disconnected" id=0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28
May 10 00:41:22.046809 env[1307]: time="2025-05-10T00:41:22.046787969Z" level=warning msg="cleaning up after shim disconnected" id=0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28 namespace=k8s.io
May 10 00:41:22.046809 env[1307]: time="2025-05-10T00:41:22.046797568Z" level=info msg="cleaning up dead shim"
May 10 00:41:22.053806 env[1307]: time="2025-05-10T00:41:22.053756327Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:41:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2800 runtime=io.containerd.runc.v2\n"
May 10 00:41:22.744334 kubelet[2175]: E0510 00:41:22.744298 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:22.745862 env[1307]: time="2025-05-10T00:41:22.745819544Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 00:41:22.765516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544572569.mount: Deactivated successfully.
May 10 00:41:22.768233 env[1307]: time="2025-05-10T00:41:22.768148302Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\""
May 10 00:41:22.768881 env[1307]: time="2025-05-10T00:41:22.768834311Z" level=info msg="StartContainer for \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\""
May 10 00:41:22.908586 env[1307]: time="2025-05-10T00:41:22.908456984Z" level=info msg="StartContainer for \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\" returns successfully"
May 10 00:41:22.963392 env[1307]: time="2025-05-10T00:41:22.963312747Z" level=info msg="shim disconnected" id=fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07
May 10 00:41:22.963392 env[1307]: time="2025-05-10T00:41:22.963381147Z" level=warning msg="cleaning up after shim disconnected" id=fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07 namespace=k8s.io
May 10 00:41:22.963392 env[1307]: time="2025-05-10T00:41:22.963390546Z" level=info msg="cleaning up dead shim"
May 10 00:41:22.972041 env[1307]: time="2025-05-10T00:41:22.971960828Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:41:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2857 runtime=io.containerd.runc.v2\n"
May 10 00:41:23.214686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07-rootfs.mount: Deactivated successfully.
May 10 00:41:23.747785 kubelet[2175]: E0510 00:41:23.747203 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:23.749890 env[1307]: time="2025-05-10T00:41:23.749839322Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:41:23.766291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113951594.mount: Deactivated successfully.
May 10 00:41:23.766876 env[1307]: time="2025-05-10T00:41:23.766826937Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\""
May 10 00:41:23.767581 env[1307]: time="2025-05-10T00:41:23.767547771Z" level=info msg="StartContainer for \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\""
May 10 00:41:23.813026 env[1307]: time="2025-05-10T00:41:23.812977483Z" level=info msg="StartContainer for \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\" returns successfully"
May 10 00:41:23.831088 env[1307]: time="2025-05-10T00:41:23.831028445Z" level=info msg="shim disconnected" id=26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a
May 10 00:41:23.831088 env[1307]: time="2025-05-10T00:41:23.831080875Z" level=warning msg="cleaning up after shim disconnected" id=26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a namespace=k8s.io
May 10 00:41:23.831088 env[1307]: time="2025-05-10T00:41:23.831090373Z" level=info msg="cleaning up dead shim"
May 10 00:41:23.838617 env[1307]: time="2025-05-10T00:41:23.838552555Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:41:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2912 runtime=io.containerd.runc.v2\n"
May 10 00:41:24.214971 systemd[1]: run-containerd-runc-k8s.io-26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a-runc.iWmmHp.mount: Deactivated successfully.
May 10 00:41:24.215111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a-rootfs.mount: Deactivated successfully.
May 10 00:41:24.751352 kubelet[2175]: E0510 00:41:24.751312 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:24.753725 env[1307]: time="2025-05-10T00:41:24.753617305Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:41:24.770871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520754066.mount: Deactivated successfully.
May 10 00:41:24.772724 env[1307]: time="2025-05-10T00:41:24.772685087Z" level=info msg="CreateContainer within sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\""
May 10 00:41:24.773248 env[1307]: time="2025-05-10T00:41:24.773216509Z" level=info msg="StartContainer for \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\""
May 10 00:41:24.818003 env[1307]: time="2025-05-10T00:41:24.817951112Z" level=info msg="StartContainer for \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\" returns successfully"
May 10 00:41:24.968206 kubelet[2175]: I0510 00:41:24.968134 2175 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 10 00:41:24.995397 kubelet[2175]: I0510 00:41:24.995256 2175 topology_manager.go:215] "Topology Admit Handler" podUID="8b3e7b8f-e332-4d39-a0f5-fe1655715420" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mlv4r"
May 10 00:41:24.996990 kubelet[2175]: I0510 00:41:24.996929 2175 topology_manager.go:215] "Topology Admit Handler" podUID="0a6dc6e7-be5a-4645-9c3d-d5b18b3ade98" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lcsck"
May 10 00:41:25.091093 kubelet[2175]: I0510 00:41:25.090923 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b3e7b8f-e332-4d39-a0f5-fe1655715420-config-volume\") pod \"coredns-7db6d8ff4d-mlv4r\" (UID: \"8b3e7b8f-e332-4d39-a0f5-fe1655715420\") " pod="kube-system/coredns-7db6d8ff4d-mlv4r"
May 10 00:41:25.091343 kubelet[2175]: I0510 00:41:25.091304 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6dc6e7-be5a-4645-9c3d-d5b18b3ade98-config-volume\") pod \"coredns-7db6d8ff4d-lcsck\" (UID: \"0a6dc6e7-be5a-4645-9c3d-d5b18b3ade98\") " pod="kube-system/coredns-7db6d8ff4d-lcsck"
May 10 00:41:25.091500 kubelet[2175]: I0510 00:41:25.091463 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqwj\" (UniqueName: \"kubernetes.io/projected/8b3e7b8f-e332-4d39-a0f5-fe1655715420-kube-api-access-hdqwj\") pod \"coredns-7db6d8ff4d-mlv4r\" (UID: \"8b3e7b8f-e332-4d39-a0f5-fe1655715420\") " pod="kube-system/coredns-7db6d8ff4d-mlv4r"
May 10 00:41:25.091705 kubelet[2175]: I0510 00:41:25.091681 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j84gq\" (UniqueName: \"kubernetes.io/projected/0a6dc6e7-be5a-4645-9c3d-d5b18b3ade98-kube-api-access-j84gq\") pod \"coredns-7db6d8ff4d-lcsck\" (UID: \"0a6dc6e7-be5a-4645-9c3d-d5b18b3ade98\") " pod="kube-system/coredns-7db6d8ff4d-lcsck"
May 10 00:41:25.308914 kubelet[2175]: E0510 00:41:25.308865 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:25.310967 kubelet[2175]: E0510 00:41:25.310947 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:25.350964 env[1307]: time="2025-05-10T00:41:25.350928283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlv4r,Uid:8b3e7b8f-e332-4d39-a0f5-fe1655715420,Namespace:kube-system,Attempt:0,}"
May 10 00:41:25.351125 env[1307]: time="2025-05-10T00:41:25.350921069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lcsck,Uid:0a6dc6e7-be5a-4645-9c3d-d5b18b3ade98,Namespace:kube-system,Attempt:0,}"
May 10 00:41:25.756314 kubelet[2175]: E0510 00:41:25.756186 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:25.949981 kubelet[2175]: I0510 00:41:25.949861 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8srhz" podStartSLOduration=5.560982633 podStartE2EDuration="16.94984416s" podCreationTimestamp="2025-05-10 00:41:09 +0000 UTC" firstStartedPulling="2025-05-10 00:41:09.80551794 +0000 UTC m=+16.235099717" lastFinishedPulling="2025-05-10 00:41:21.194379457 +0000 UTC m=+27.623961244" observedRunningTime="2025-05-10 00:41:25.94962885 +0000 UTC m=+32.379210647" watchObservedRunningTime="2025-05-10 00:41:25.94984416 +0000 UTC m=+32.379425937"
May 10 00:41:26.341092 systemd[1]: Started sshd@7-10.0.0.68:22-10.0.0.1:54558.service.
May 10 00:41:26.387811 sshd[3073]: Accepted publickey for core from 10.0.0.1 port 54558 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:26.389176 sshd[3073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:26.393508 systemd-logind[1299]: New session 8 of user core.
May 10 00:41:26.394647 systemd[1]: Started session-8.scope.
May 10 00:41:26.511547 sshd[3073]: pam_unix(sshd:session): session closed for user core
May 10 00:41:26.514048 systemd[1]: sshd@7-10.0.0.68:22-10.0.0.1:54558.service: Deactivated successfully.
May 10 00:41:26.515133 systemd[1]: session-8.scope: Deactivated successfully.
May 10 00:41:26.516135 systemd-logind[1299]: Session 8 logged out. Waiting for processes to exit.
May 10 00:41:26.516988 systemd-logind[1299]: Removed session 8.
May 10 00:41:26.758346 kubelet[2175]: E0510 00:41:26.758289 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:26.861808 systemd-networkd[1086]: cilium_host: Link UP
May 10 00:41:26.861931 systemd-networkd[1086]: cilium_net: Link UP
May 10 00:41:26.864340 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 10 00:41:26.864443 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 10 00:41:26.865027 systemd-networkd[1086]: cilium_net: Gained carrier
May 10 00:41:26.865607 systemd-networkd[1086]: cilium_host: Gained carrier
May 10 00:41:26.885336 systemd-networkd[1086]: cilium_net: Gained IPv6LL
May 10 00:41:26.959064 systemd-networkd[1086]: cilium_vxlan: Link UP
May 10 00:41:26.959071 systemd-networkd[1086]: cilium_vxlan: Gained carrier
May 10 00:41:27.151664 systemd-networkd[1086]: cilium_host: Gained IPv6LL
May 10 00:41:27.281413 kernel: NET: Registered PF_ALG protocol family
May 10 00:41:27.760159 kubelet[2175]: E0510 00:41:27.760127 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:27.879154 systemd-networkd[1086]: lxc_health: Link UP
May 10 00:41:27.889136 systemd-networkd[1086]: lxc_health: Gained carrier
May 10 00:41:27.889385 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:41:28.172201 systemd-networkd[1086]: lxcc0991cce2b3a: Link UP
May 10 00:41:28.179399 kernel: eth0: renamed from tmp08c9d
May 10 00:41:28.185107 systemd-networkd[1086]: lxcc0991cce2b3a: Gained carrier
May 10 00:41:28.185528 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc0991cce2b3a: link becomes ready
May 10 00:41:28.230552 systemd-networkd[1086]: cilium_vxlan: Gained IPv6LL
May 10 00:41:28.422259 systemd-networkd[1086]: lxce39604ba34d3: Link UP
May 10 00:41:28.429469 kernel: eth0: renamed from tmpb297c
May 10 00:41:28.438693 systemd-networkd[1086]: lxce39604ba34d3: Gained carrier
May 10 00:41:28.439678 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce39604ba34d3: link becomes ready
May 10 00:41:28.762104 kubelet[2175]: E0510 00:41:28.761972 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:29.638526 systemd-networkd[1086]: lxc_health: Gained IPv6LL
May 10 00:41:29.638862 systemd-networkd[1086]: lxcc0991cce2b3a: Gained IPv6LL
May 10 00:41:29.763001 kubelet[2175]: E0510 00:41:29.762970 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:29.767529 systemd-networkd[1086]: lxce39604ba34d3: Gained IPv6LL
May 10 00:41:30.764806 kubelet[2175]: E0510 00:41:30.764760 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:31.515729 systemd[1]: Started sshd@8-10.0.0.68:22-10.0.0.1:33306.service.
May 10 00:41:31.553749 env[1307]: time="2025-05-10T00:41:31.553641325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:41:31.553749 env[1307]: time="2025-05-10T00:41:31.553718581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:41:31.554187 env[1307]: time="2025-05-10T00:41:31.554138008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:41:31.554486 env[1307]: time="2025-05-10T00:41:31.554447767Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b297c20ed85cd948fa1290546405aaea43e48a665868e2d5cc3f267f33c89b81 pid=3493 runtime=io.containerd.runc.v2
May 10 00:41:31.568842 sshd[3483]: Accepted publickey for core from 10.0.0.1 port 33306 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:31.570269 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:31.575549 env[1307]: time="2025-05-10T00:41:31.575346454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:41:31.575549 env[1307]: time="2025-05-10T00:41:31.575404896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:41:31.575549 env[1307]: time="2025-05-10T00:41:31.575414764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:41:31.575925 env[1307]: time="2025-05-10T00:41:31.575849490Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/08c9d2e7945bb007ad8ece3fc80a4112c49aefacf4e226b58eae39ed4afe9091 pid=3527 runtime=io.containerd.runc.v2
May 10 00:41:31.576435 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 10 00:41:31.577345 systemd[1]: Started session-9.scope.
May 10 00:41:31.577780 systemd-logind[1299]: New session 9 of user core.
May 10 00:41:31.602909 env[1307]: time="2025-05-10T00:41:31.602855498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlv4r,Uid:8b3e7b8f-e332-4d39-a0f5-fe1655715420,Namespace:kube-system,Attempt:0,} returns sandbox id \"b297c20ed85cd948fa1290546405aaea43e48a665868e2d5cc3f267f33c89b81\""
May 10 00:41:31.609645 kubelet[2175]: E0510 00:41:31.609608 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:31.612860 env[1307]: time="2025-05-10T00:41:31.612825637Z" level=info msg="CreateContainer within sandbox \"b297c20ed85cd948fa1290546405aaea43e48a665868e2d5cc3f267f33c89b81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 10 00:41:31.618741 systemd-resolved[1228]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 10 00:41:31.628711 env[1307]: time="2025-05-10T00:41:31.628656859Z" level=info msg="CreateContainer within sandbox \"b297c20ed85cd948fa1290546405aaea43e48a665868e2d5cc3f267f33c89b81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68ba6088fbcdf2dcfd8814ce12545564fa0731a480cbdf62a26a6801a924fc3a\""
May 10 00:41:31.629175 env[1307]: time="2025-05-10T00:41:31.629085533Z" level=info msg="StartContainer for \"68ba6088fbcdf2dcfd8814ce12545564fa0731a480cbdf62a26a6801a924fc3a\""
May 10 00:41:31.642818 env[1307]: time="2025-05-10T00:41:31.641861741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lcsck,Uid:0a6dc6e7-be5a-4645-9c3d-d5b18b3ade98,Namespace:kube-system,Attempt:0,} returns sandbox id \"08c9d2e7945bb007ad8ece3fc80a4112c49aefacf4e226b58eae39ed4afe9091\""
May 10 00:41:31.642973 kubelet[2175]: E0510 00:41:31.642657 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:31.644789 env[1307]: time="2025-05-10T00:41:31.644744367Z" level=info msg="CreateContainer within sandbox \"08c9d2e7945bb007ad8ece3fc80a4112c49aefacf4e226b58eae39ed4afe9091\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 10 00:41:31.674026 env[1307]: time="2025-05-10T00:41:31.673968819Z" level=info msg="CreateContainer within sandbox \"08c9d2e7945bb007ad8ece3fc80a4112c49aefacf4e226b58eae39ed4afe9091\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"788b7bf1ccba97398c4da867ea96648b6718cb41ee7c1a25a60fd92aabcae0d2\""
May 10 00:41:31.676292 env[1307]: time="2025-05-10T00:41:31.676047939Z" level=info msg="StartContainer for \"788b7bf1ccba97398c4da867ea96648b6718cb41ee7c1a25a60fd92aabcae0d2\""
May 10 00:41:31.696877 env[1307]: time="2025-05-10T00:41:31.696816449Z" level=info msg="StartContainer for \"68ba6088fbcdf2dcfd8814ce12545564fa0731a480cbdf62a26a6801a924fc3a\" returns successfully"
May 10 00:41:31.726875 sshd[3483]: pam_unix(sshd:session): session closed for user core
May 10 00:41:31.731747 systemd[1]: sshd@8-10.0.0.68:22-10.0.0.1:33306.service: Deactivated successfully.
May 10 00:41:31.732958 systemd[1]: session-9.scope: Deactivated successfully.
May 10 00:41:31.733666 systemd-logind[1299]: Session 9 logged out. Waiting for processes to exit.
May 10 00:41:31.734689 systemd-logind[1299]: Removed session 9.
May 10 00:41:31.735215 env[1307]: time="2025-05-10T00:41:31.735151146Z" level=info msg="StartContainer for \"788b7bf1ccba97398c4da867ea96648b6718cb41ee7c1a25a60fd92aabcae0d2\" returns successfully"
May 10 00:41:31.769208 kubelet[2175]: E0510 00:41:31.767706 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:31.769972 kubelet[2175]: E0510 00:41:31.769907 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:31.801206 kubelet[2175]: I0510 00:41:31.801115 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lcsck" podStartSLOduration=23.801083847 podStartE2EDuration="23.801083847s" podCreationTimestamp="2025-05-10 00:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:41:31.785265129 +0000 UTC m=+38.214846926" watchObservedRunningTime="2025-05-10 00:41:31.801083847 +0000 UTC m=+38.230665624"
May 10 00:41:32.561684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568196352.mount: Deactivated successfully.
May 10 00:41:32.772242 kubelet[2175]: E0510 00:41:32.772190 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:32.772242 kubelet[2175]: E0510 00:41:32.772251 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:32.781524 kubelet[2175]: I0510 00:41:32.781460 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mlv4r" podStartSLOduration=24.781444094 podStartE2EDuration="24.781444094s" podCreationTimestamp="2025-05-10 00:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:41:31.801759761 +0000 UTC m=+38.231341548" watchObservedRunningTime="2025-05-10 00:41:32.781444094 +0000 UTC m=+39.211025901"
May 10 00:41:33.774099 kubelet[2175]: E0510 00:41:33.774052 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:33.774541 kubelet[2175]: E0510 00:41:33.774266 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:41:36.730292 systemd[1]: Started sshd@9-10.0.0.68:22-10.0.0.1:33108.service.
May 10 00:41:36.772483 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 33108 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:36.779521 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:36.783386 systemd-logind[1299]: New session 10 of user core.
May 10 00:41:36.784452 systemd[1]: Started session-10.scope.
May 10 00:41:36.894199 sshd[3665]: pam_unix(sshd:session): session closed for user core
May 10 00:41:36.897813 systemd[1]: Started sshd@10-10.0.0.68:22-10.0.0.1:33110.service.
May 10 00:41:36.898644 systemd[1]: sshd@9-10.0.0.68:22-10.0.0.1:33108.service: Deactivated successfully.
May 10 00:41:36.901454 systemd[1]: session-10.scope: Deactivated successfully.
May 10 00:41:36.902252 systemd-logind[1299]: Session 10 logged out. Waiting for processes to exit.
May 10 00:41:36.903222 systemd-logind[1299]: Removed session 10.
May 10 00:41:36.942905 sshd[3679]: Accepted publickey for core from 10.0.0.1 port 33110 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:36.944072 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:36.947326 systemd-logind[1299]: New session 11 of user core.
May 10 00:41:36.948305 systemd[1]: Started session-11.scope.
May 10 00:41:37.106470 sshd[3679]: pam_unix(sshd:session): session closed for user core
May 10 00:41:37.109950 systemd[1]: Started sshd@11-10.0.0.68:22-10.0.0.1:33126.service.
May 10 00:41:37.110626 systemd[1]: sshd@10-10.0.0.68:22-10.0.0.1:33110.service: Deactivated successfully.
May 10 00:41:37.112695 systemd[1]: session-11.scope: Deactivated successfully.
May 10 00:41:37.115170 systemd-logind[1299]: Session 11 logged out. Waiting for processes to exit.
May 10 00:41:37.116953 systemd-logind[1299]: Removed session 11.
May 10 00:41:37.162926 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 33126 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:37.164280 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:37.167806 systemd-logind[1299]: New session 12 of user core.
May 10 00:41:37.168840 systemd[1]: Started session-12.scope.
May 10 00:41:37.283824 sshd[3690]: pam_unix(sshd:session): session closed for user core
May 10 00:41:37.286306 systemd[1]: sshd@11-10.0.0.68:22-10.0.0.1:33126.service: Deactivated successfully.
May 10 00:41:37.287262 systemd-logind[1299]: Session 12 logged out. Waiting for processes to exit.
May 10 00:41:37.287299 systemd[1]: session-12.scope: Deactivated successfully.
May 10 00:41:37.288273 systemd-logind[1299]: Removed session 12.
May 10 00:41:42.286847 systemd[1]: Started sshd@12-10.0.0.68:22-10.0.0.1:33142.service.
May 10 00:41:42.325763 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 33142 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:42.326786 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:42.329965 systemd-logind[1299]: New session 13 of user core.
May 10 00:41:42.330753 systemd[1]: Started session-13.scope.
May 10 00:41:42.436948 sshd[3709]: pam_unix(sshd:session): session closed for user core
May 10 00:41:42.439282 systemd[1]: sshd@12-10.0.0.68:22-10.0.0.1:33142.service: Deactivated successfully.
May 10 00:41:42.440350 systemd-logind[1299]: Session 13 logged out. Waiting for processes to exit.
May 10 00:41:42.440462 systemd[1]: session-13.scope: Deactivated successfully.
May 10 00:41:42.441385 systemd-logind[1299]: Removed session 13.
May 10 00:41:47.440819 systemd[1]: Started sshd@13-10.0.0.68:22-10.0.0.1:54136.service.
May 10 00:41:47.485920 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 54136 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:47.487336 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:47.491496 systemd-logind[1299]: New session 14 of user core.
May 10 00:41:47.492740 systemd[1]: Started session-14.scope.
May 10 00:41:47.604996 sshd[3723]: pam_unix(sshd:session): session closed for user core
May 10 00:41:47.607673 systemd[1]: Started sshd@14-10.0.0.68:22-10.0.0.1:54140.service.
May 10 00:41:47.608398 systemd[1]: sshd@13-10.0.0.68:22-10.0.0.1:54136.service: Deactivated successfully.
May 10 00:41:47.609499 systemd[1]: session-14.scope: Deactivated successfully.
May 10 00:41:47.609963 systemd-logind[1299]: Session 14 logged out. Waiting for processes to exit.
May 10 00:41:47.610929 systemd-logind[1299]: Removed session 14.
May 10 00:41:47.648349 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 54140 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:47.649564 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:47.653321 systemd-logind[1299]: New session 15 of user core.
May 10 00:41:47.654129 systemd[1]: Started session-15.scope.
May 10 00:41:47.921096 sshd[3735]: pam_unix(sshd:session): session closed for user core
May 10 00:41:47.923957 systemd[1]: Started sshd@15-10.0.0.68:22-10.0.0.1:54144.service.
May 10 00:41:47.924527 systemd[1]: sshd@14-10.0.0.68:22-10.0.0.1:54140.service: Deactivated successfully.
May 10 00:41:47.925768 systemd[1]: session-15.scope: Deactivated successfully.
May 10 00:41:47.925808 systemd-logind[1299]: Session 15 logged out. Waiting for processes to exit.
May 10 00:41:47.926949 systemd-logind[1299]: Removed session 15.
May 10 00:41:47.968250 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 54144 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:47.969411 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:47.973595 systemd-logind[1299]: New session 16 of user core.
May 10 00:41:47.974742 systemd[1]: Started session-16.scope.
May 10 00:41:49.497345 sshd[3747]: pam_unix(sshd:session): session closed for user core
May 10 00:41:49.500080 systemd[1]: Started sshd@16-10.0.0.68:22-10.0.0.1:54150.service.
May 10 00:41:49.504965 systemd[1]: sshd@15-10.0.0.68:22-10.0.0.1:54144.service: Deactivated successfully.
May 10 00:41:49.505791 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:41:49.508158 systemd-logind[1299]: Session 16 logged out. Waiting for processes to exit.
May 10 00:41:49.509618 systemd-logind[1299]: Removed session 16.
May 10 00:41:49.550846 sshd[3766]: Accepted publickey for core from 10.0.0.1 port 54150 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:49.552397 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:49.557211 systemd-logind[1299]: New session 17 of user core.
May 10 00:41:49.557897 systemd[1]: Started session-17.scope.
May 10 00:41:49.805703 sshd[3766]: pam_unix(sshd:session): session closed for user core
May 10 00:41:49.811088 systemd[1]: Started sshd@17-10.0.0.68:22-10.0.0.1:54158.service.
May 10 00:41:49.814966 systemd[1]: sshd@16-10.0.0.68:22-10.0.0.1:54150.service: Deactivated successfully.
May 10 00:41:49.816936 systemd-logind[1299]: Session 17 logged out. Waiting for processes to exit.
May 10 00:41:49.817088 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:41:49.819762 systemd-logind[1299]: Removed session 17.
May 10 00:41:49.856694 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 54158 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:49.858183 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:49.862484 systemd-logind[1299]: New session 18 of user core.
May 10 00:41:49.863283 systemd[1]: Started session-18.scope.
May 10 00:41:49.973890 sshd[3781]: pam_unix(sshd:session): session closed for user core
May 10 00:41:49.976651 systemd[1]: sshd@17-10.0.0.68:22-10.0.0.1:54158.service: Deactivated successfully.
May 10 00:41:49.977701 systemd-logind[1299]: Session 18 logged out. Waiting for processes to exit.
May 10 00:41:49.977849 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:41:49.978855 systemd-logind[1299]: Removed session 18.
May 10 00:41:54.978127 systemd[1]: Started sshd@18-10.0.0.68:22-10.0.0.1:54160.service.
May 10 00:41:55.020212 sshd[3801]: Accepted publickey for core from 10.0.0.1 port 54160 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:41:55.021953 sshd[3801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:41:55.026250 systemd-logind[1299]: New session 19 of user core.
May 10 00:41:55.026959 systemd[1]: Started session-19.scope.
May 10 00:41:55.144517 sshd[3801]: pam_unix(sshd:session): session closed for user core
May 10 00:41:55.147403 systemd[1]: sshd@18-10.0.0.68:22-10.0.0.1:54160.service: Deactivated successfully.
May 10 00:41:55.148668 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:41:55.148763 systemd-logind[1299]: Session 19 logged out. Waiting for processes to exit.
May 10 00:41:55.149977 systemd-logind[1299]: Removed session 19.
May 10 00:42:00.148434 systemd[1]: Started sshd@19-10.0.0.68:22-10.0.0.1:54250.service.
May 10 00:42:00.187529 sshd[3819]: Accepted publickey for core from 10.0.0.1 port 54250 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:42:00.188838 sshd[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:42:00.192205 systemd-logind[1299]: New session 20 of user core.
May 10 00:42:00.193249 systemd[1]: Started session-20.scope.
May 10 00:42:00.295107 sshd[3819]: pam_unix(sshd:session): session closed for user core
May 10 00:42:00.297182 systemd[1]: sshd@19-10.0.0.68:22-10.0.0.1:54250.service: Deactivated successfully.
May 10 00:42:00.298421 systemd[1]: session-20.scope: Deactivated successfully.
May 10 00:42:00.298777 systemd-logind[1299]: Session 20 logged out. Waiting for processes to exit.
May 10 00:42:00.299602 systemd-logind[1299]: Removed session 20.
May 10 00:42:05.299198 systemd[1]: Started sshd@20-10.0.0.68:22-10.0.0.1:54262.service.
May 10 00:42:05.339488 sshd[3834]: Accepted publickey for core from 10.0.0.1 port 54262 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:42:05.340936 sshd[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:42:05.344859 systemd-logind[1299]: New session 21 of user core.
May 10 00:42:05.345634 systemd[1]: Started session-21.scope.
May 10 00:42:05.451543 sshd[3834]: pam_unix(sshd:session): session closed for user core
May 10 00:42:05.454290 systemd[1]: sshd@20-10.0.0.68:22-10.0.0.1:54262.service: Deactivated successfully.
May 10 00:42:05.455496 systemd[1]: session-21.scope: Deactivated successfully.
May 10 00:42:05.455948 systemd-logind[1299]: Session 21 logged out. Waiting for processes to exit.
May 10 00:42:05.456909 systemd-logind[1299]: Removed session 21.
May 10 00:42:10.455661 systemd[1]: Started sshd@21-10.0.0.68:22-10.0.0.1:32870.service.
May 10 00:42:10.496601 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 32870 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:42:10.498260 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:42:10.502300 systemd-logind[1299]: New session 22 of user core.
May 10 00:42:10.503182 systemd[1]: Started session-22.scope.
May 10 00:42:10.616206 sshd[3850]: pam_unix(sshd:session): session closed for user core
May 10 00:42:10.618958 systemd[1]: Started sshd@22-10.0.0.68:22-10.0.0.1:32884.service.
May 10 00:42:10.619503 systemd[1]: sshd@21-10.0.0.68:22-10.0.0.1:32870.service: Deactivated successfully.
May 10 00:42:10.620454 systemd-logind[1299]: Session 22 logged out. Waiting for processes to exit.
May 10 00:42:10.620484 systemd[1]: session-22.scope: Deactivated successfully.
May 10 00:42:10.621573 systemd-logind[1299]: Removed session 22.
May 10 00:42:10.659195 sshd[3862]: Accepted publickey for core from 10.0.0.1 port 32884 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:42:10.660571 sshd[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:42:10.664757 systemd-logind[1299]: New session 23 of user core.
May 10 00:42:10.665774 systemd[1]: Started session-23.scope.
May 10 00:42:12.175123 env[1307]: time="2025-05-10T00:42:12.175056819Z" level=info msg="StopContainer for \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\" with timeout 30 (s)"
May 10 00:42:12.176059 env[1307]: time="2025-05-10T00:42:12.176031623Z" level=info msg="Stop container \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\" with signal terminated"
May 10 00:42:12.198350 env[1307]: time="2025-05-10T00:42:12.198279058Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:42:12.203545 env[1307]: time="2025-05-10T00:42:12.203488539Z" level=info msg="StopContainer for \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\" with timeout 2 (s)"
May 10 00:42:12.203769 env[1307]: time="2025-05-10T00:42:12.203744116Z" level=info msg="Stop container \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\" with signal terminated"
May 10 00:42:12.210031 systemd-networkd[1086]: lxc_health: Link DOWN
May 10 00:42:12.210346 systemd-networkd[1086]: lxc_health: Lost carrier
May 10 00:42:12.212450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b-rootfs.mount: Deactivated successfully.
May 10 00:42:12.214983 env[1307]: time="2025-05-10T00:42:12.214937400Z" level=info msg="shim disconnected" id=537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b
May 10 00:42:12.215091 env[1307]: time="2025-05-10T00:42:12.214992004Z" level=warning msg="cleaning up after shim disconnected" id=537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b namespace=k8s.io
May 10 00:42:12.215091 env[1307]: time="2025-05-10T00:42:12.215009057Z" level=info msg="cleaning up dead shim"
May 10 00:42:12.221683 env[1307]: time="2025-05-10T00:42:12.221623550Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:42:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3922 runtime=io.containerd.runc.v2\n"
May 10 00:42:12.224421 env[1307]: time="2025-05-10T00:42:12.224356530Z" level=info msg="StopContainer for \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\" returns successfully"
May 10 00:42:12.225131 env[1307]: time="2025-05-10T00:42:12.225094043Z" level=info msg="StopPodSandbox for \"46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7\""
May 10 00:42:12.225197 env[1307]: time="2025-05-10T00:42:12.225181078Z" level=info msg="Container to stop \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:42:12.228127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7-shm.mount: Deactivated successfully.
May 10 00:42:12.261948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7-rootfs.mount: Deactivated successfully. May 10 00:42:12.267290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186-rootfs.mount: Deactivated successfully. May 10 00:42:12.271290 env[1307]: time="2025-05-10T00:42:12.271221519Z" level=info msg="shim disconnected" id=46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7 May 10 00:42:12.271290 env[1307]: time="2025-05-10T00:42:12.271277775Z" level=warning msg="cleaning up after shim disconnected" id=46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7 namespace=k8s.io May 10 00:42:12.271471 env[1307]: time="2025-05-10T00:42:12.271292955Z" level=info msg="cleaning up dead shim" May 10 00:42:12.271471 env[1307]: time="2025-05-10T00:42:12.271281502Z" level=info msg="shim disconnected" id=99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186 May 10 00:42:12.271471 env[1307]: time="2025-05-10T00:42:12.271315036Z" level=warning msg="cleaning up after shim disconnected" id=99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186 namespace=k8s.io May 10 00:42:12.271471 env[1307]: time="2025-05-10T00:42:12.271324194Z" level=info msg="cleaning up dead shim" May 10 00:42:12.279476 env[1307]: time="2025-05-10T00:42:12.279409576Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:42:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3970 runtime=io.containerd.runc.v2\n" May 10 00:42:12.279476 env[1307]: time="2025-05-10T00:42:12.279407732Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:42:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3971 runtime=io.containerd.runc.v2\n" May 10 00:42:12.280136 env[1307]: time="2025-05-10T00:42:12.280096924Z" level=info msg="TearDown network for sandbox 
\"46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7\" successfully" May 10 00:42:12.280172 env[1307]: time="2025-05-10T00:42:12.280132562Z" level=info msg="StopPodSandbox for \"46dd5daa1247149682030fc64898699ab638b44efe03cdde8cd157501f05f5e7\" returns successfully" May 10 00:42:12.282140 env[1307]: time="2025-05-10T00:42:12.282095115Z" level=info msg="StopContainer for \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\" returns successfully" May 10 00:42:12.282457 env[1307]: time="2025-05-10T00:42:12.282417298Z" level=info msg="StopPodSandbox for \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\"" May 10 00:42:12.282524 env[1307]: time="2025-05-10T00:42:12.282490047Z" level=info msg="Container to stop \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:42:12.282524 env[1307]: time="2025-05-10T00:42:12.282509192Z" level=info msg="Container to stop \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:42:12.282593 env[1307]: time="2025-05-10T00:42:12.282521837Z" level=info msg="Container to stop \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:42:12.282593 env[1307]: time="2025-05-10T00:42:12.282535102Z" level=info msg="Container to stop \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:42:12.282593 env[1307]: time="2025-05-10T00:42:12.282547275Z" level=info msg="Container to stop \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:42:12.312194 env[1307]: time="2025-05-10T00:42:12.312127280Z" level=info msg="shim 
disconnected" id=9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0 May 10 00:42:12.312499 env[1307]: time="2025-05-10T00:42:12.312460574Z" level=warning msg="cleaning up after shim disconnected" id=9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0 namespace=k8s.io May 10 00:42:12.312499 env[1307]: time="2025-05-10T00:42:12.312486493Z" level=info msg="cleaning up dead shim" May 10 00:42:12.321981 env[1307]: time="2025-05-10T00:42:12.321923226Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:42:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4016 runtime=io.containerd.runc.v2\n" May 10 00:42:12.322333 env[1307]: time="2025-05-10T00:42:12.322301626Z" level=info msg="TearDown network for sandbox \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" successfully" May 10 00:42:12.322408 env[1307]: time="2025-05-10T00:42:12.322331332Z" level=info msg="StopPodSandbox for \"9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0\" returns successfully" May 10 00:42:12.374916 kubelet[2175]: I0510 00:42:12.374863 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2058d697-709e-47f2-9e0b-7d3b8998b321-cilium-config-path\") pod \"2058d697-709e-47f2-9e0b-7d3b8998b321\" (UID: \"2058d697-709e-47f2-9e0b-7d3b8998b321\") " May 10 00:42:12.374916 kubelet[2175]: I0510 00:42:12.374915 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkxr9\" (UniqueName: \"kubernetes.io/projected/2058d697-709e-47f2-9e0b-7d3b8998b321-kube-api-access-lkxr9\") pod \"2058d697-709e-47f2-9e0b-7d3b8998b321\" (UID: \"2058d697-709e-47f2-9e0b-7d3b8998b321\") " May 10 00:42:12.377685 kubelet[2175]: I0510 00:42:12.377645 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2058d697-709e-47f2-9e0b-7d3b8998b321-cilium-config-path" 
(OuterVolumeSpecName: "cilium-config-path") pod "2058d697-709e-47f2-9e0b-7d3b8998b321" (UID: "2058d697-709e-47f2-9e0b-7d3b8998b321"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:42:12.378189 kubelet[2175]: I0510 00:42:12.378119 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2058d697-709e-47f2-9e0b-7d3b8998b321-kube-api-access-lkxr9" (OuterVolumeSpecName: "kube-api-access-lkxr9") pod "2058d697-709e-47f2-9e0b-7d3b8998b321" (UID: "2058d697-709e-47f2-9e0b-7d3b8998b321"). InnerVolumeSpecName "kube-api-access-lkxr9". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:42:12.476267 kubelet[2175]: I0510 00:42:12.476119 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-etc-cni-netd\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.476267 kubelet[2175]: I0510 00:42:12.476178 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cni-path\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.476267 kubelet[2175]: I0510 00:42:12.476203 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-host-proc-sys-kernel\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.476267 kubelet[2175]: I0510 00:42:12.476229 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-hostproc\") pod 
\"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.476512 kubelet[2175]: I0510 00:42:12.476279 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.476512 kubelet[2175]: I0510 00:42:12.476330 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cni-path" (OuterVolumeSpecName: "cni-path") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.476512 kubelet[2175]: I0510 00:42:12.476298 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.476512 kubelet[2175]: I0510 00:42:12.476406 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-hostproc" (OuterVolumeSpecName: "hostproc") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.477430 kubelet[2175]: I0510 00:42:12.477409 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwd79\" (UniqueName: \"kubernetes.io/projected/1382a6d9-ea67-4e19-ba10-0fc67a849a35-kube-api-access-mwd79\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477479 kubelet[2175]: I0510 00:42:12.477440 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1382a6d9-ea67-4e19-ba10-0fc67a849a35-clustermesh-secrets\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477789 kubelet[2175]: I0510 00:42:12.477771 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-cgroup\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477846 kubelet[2175]: I0510 00:42:12.477792 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-host-proc-sys-net\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477846 kubelet[2175]: I0510 00:42:12.477805 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-lib-modules\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477846 kubelet[2175]: I0510 00:42:12.477810 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.477846 kubelet[2175]: I0510 00:42:12.477825 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1382a6d9-ea67-4e19-ba10-0fc67a849a35-hubble-tls\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477846 kubelet[2175]: I0510 00:42:12.477842 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-run\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477966 kubelet[2175]: I0510 00:42:12.477842 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.477966 kubelet[2175]: I0510 00:42:12.477854 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-bpf-maps\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477966 kubelet[2175]: I0510 00:42:12.477871 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-config-path\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477966 kubelet[2175]: I0510 00:42:12.477883 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-xtables-lock\") pod \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\" (UID: \"1382a6d9-ea67-4e19-ba10-0fc67a849a35\") " May 10 00:42:12.477966 kubelet[2175]: I0510 00:42:12.477914 2175 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-hostproc\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.477966 kubelet[2175]: I0510 00:42:12.477942 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.478125 kubelet[2175]: I0510 00:42:12.477952 2175 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lkxr9\" (UniqueName: \"kubernetes.io/projected/2058d697-709e-47f2-9e0b-7d3b8998b321-kube-api-access-lkxr9\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.478125 kubelet[2175]: I0510 00:42:12.477959 2175 
reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.478125 kubelet[2175]: I0510 00:42:12.477968 2175 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.478125 kubelet[2175]: I0510 00:42:12.477974 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2058d697-709e-47f2-9e0b-7d3b8998b321-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.478125 kubelet[2175]: I0510 00:42:12.477984 2175 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cni-path\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.478125 kubelet[2175]: I0510 00:42:12.477991 2175 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.478125 kubelet[2175]: I0510 00:42:12.478008 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.478282 kubelet[2175]: I0510 00:42:12.478034 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.478282 kubelet[2175]: I0510 00:42:12.478064 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.478282 kubelet[2175]: I0510 00:42:12.478106 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:12.480029 kubelet[2175]: I0510 00:42:12.479992 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 00:42:12.480581 kubelet[2175]: I0510 00:42:12.480562 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1382a6d9-ea67-4e19-ba10-0fc67a849a35-kube-api-access-mwd79" (OuterVolumeSpecName: "kube-api-access-mwd79") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "kube-api-access-mwd79". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:42:12.481048 kubelet[2175]: I0510 00:42:12.481017 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1382a6d9-ea67-4e19-ba10-0fc67a849a35-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 00:42:12.481193 kubelet[2175]: I0510 00:42:12.481173 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1382a6d9-ea67-4e19-ba10-0fc67a849a35-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1382a6d9-ea67-4e19-ba10-0fc67a849a35" (UID: "1382a6d9-ea67-4e19-ba10-0fc67a849a35"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 00:42:12.578675 kubelet[2175]: I0510 00:42:12.578612 2175 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mwd79\" (UniqueName: \"kubernetes.io/projected/1382a6d9-ea67-4e19-ba10-0fc67a849a35-kube-api-access-mwd79\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.578675 kubelet[2175]: I0510 00:42:12.578659 2175 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1382a6d9-ea67-4e19-ba10-0fc67a849a35-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.578675 kubelet[2175]: I0510 00:42:12.578670 2175 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-lib-modules\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.578675 kubelet[2175]: I0510 00:42:12.578678 2175 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1382a6d9-ea67-4e19-ba10-0fc67a849a35-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.578675 kubelet[2175]: I0510 00:42:12.578685 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-run\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.578963 kubelet[2175]: I0510 00:42:12.578695 2175 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.578963 kubelet[2175]: I0510 00:42:12.578702 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1382a6d9-ea67-4e19-ba10-0fc67a849a35-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.578963 kubelet[2175]: I0510 
00:42:12.578708 2175 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1382a6d9-ea67-4e19-ba10-0fc67a849a35-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 10 00:42:12.852773 kubelet[2175]: I0510 00:42:12.852739 2175 scope.go:117] "RemoveContainer" containerID="99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186" May 10 00:42:12.854111 env[1307]: time="2025-05-10T00:42:12.854063541Z" level=info msg="RemoveContainer for \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\"" May 10 00:42:12.861285 env[1307]: time="2025-05-10T00:42:12.861236197Z" level=info msg="RemoveContainer for \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\" returns successfully" May 10 00:42:12.862421 kubelet[2175]: I0510 00:42:12.862022 2175 scope.go:117] "RemoveContainer" containerID="26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a" May 10 00:42:12.863036 env[1307]: time="2025-05-10T00:42:12.863001485Z" level=info msg="RemoveContainer for \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\"" May 10 00:42:12.866442 env[1307]: time="2025-05-10T00:42:12.866408977Z" level=info msg="RemoveContainer for \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\" returns successfully" May 10 00:42:12.866645 kubelet[2175]: I0510 00:42:12.866606 2175 scope.go:117] "RemoveContainer" containerID="fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07" May 10 00:42:12.867746 env[1307]: time="2025-05-10T00:42:12.867712708Z" level=info msg="RemoveContainer for \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\"" May 10 00:42:12.872451 env[1307]: time="2025-05-10T00:42:12.872395928Z" level=info msg="RemoveContainer for \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\" returns successfully" May 10 00:42:12.872626 kubelet[2175]: I0510 00:42:12.872609 2175 scope.go:117] "RemoveContainer" 
containerID="0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28" May 10 00:42:12.873677 env[1307]: time="2025-05-10T00:42:12.873643340Z" level=info msg="RemoveContainer for \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\"" May 10 00:42:12.878895 env[1307]: time="2025-05-10T00:42:12.878854244Z" level=info msg="RemoveContainer for \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\" returns successfully" May 10 00:42:12.879094 kubelet[2175]: I0510 00:42:12.879043 2175 scope.go:117] "RemoveContainer" containerID="2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122" May 10 00:42:12.880491 env[1307]: time="2025-05-10T00:42:12.880450481Z" level=info msg="RemoveContainer for \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\"" May 10 00:42:12.884350 env[1307]: time="2025-05-10T00:42:12.884308921Z" level=info msg="RemoveContainer for \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\" returns successfully" May 10 00:42:12.884581 kubelet[2175]: I0510 00:42:12.884539 2175 scope.go:117] "RemoveContainer" containerID="99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186" May 10 00:42:12.884909 env[1307]: time="2025-05-10T00:42:12.884795697Z" level=error msg="ContainerStatus for \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\": not found" May 10 00:42:12.885042 kubelet[2175]: E0510 00:42:12.885013 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\": not found" containerID="99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186" May 10 00:42:12.885143 kubelet[2175]: I0510 00:42:12.885046 2175 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186"} err="failed to get container status \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\": rpc error: code = NotFound desc = an error occurred when try to find container \"99d0cdf725872391eb067429b60c45c5490f46f07335e8db231a3b4f53ef0186\": not found" May 10 00:42:12.885143 kubelet[2175]: I0510 00:42:12.885142 2175 scope.go:117] "RemoveContainer" containerID="26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a" May 10 00:42:12.885310 env[1307]: time="2025-05-10T00:42:12.885270241Z" level=error msg="ContainerStatus for \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\": not found" May 10 00:42:12.885476 kubelet[2175]: E0510 00:42:12.885427 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\": not found" containerID="26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a" May 10 00:42:12.885476 kubelet[2175]: I0510 00:42:12.885465 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a"} err="failed to get container status \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\": rpc error: code = NotFound desc = an error occurred when try to find container \"26a02c55624f35a7b833d2d4172ae2c0c34840b654e589115306329dea6ee45a\": not found" May 10 00:42:12.885566 kubelet[2175]: I0510 00:42:12.885491 2175 scope.go:117] "RemoveContainer" 
containerID="fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07" May 10 00:42:12.885728 env[1307]: time="2025-05-10T00:42:12.885678547Z" level=error msg="ContainerStatus for \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\": not found" May 10 00:42:12.885835 kubelet[2175]: E0510 00:42:12.885813 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\": not found" containerID="fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07" May 10 00:42:12.885888 kubelet[2175]: I0510 00:42:12.885835 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07"} err="failed to get container status \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdd0cc65b8d88953175164dec748f6f0bc77ea2c5ce0ee7c7a0c0ae669f73c07\": not found" May 10 00:42:12.885888 kubelet[2175]: I0510 00:42:12.885851 2175 scope.go:117] "RemoveContainer" containerID="0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28" May 10 00:42:12.886006 env[1307]: time="2025-05-10T00:42:12.885971675Z" level=error msg="ContainerStatus for \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\": not found" May 10 00:42:12.886215 kubelet[2175]: E0510 00:42:12.886180 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\": not found" containerID="0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28" May 10 00:42:12.886264 kubelet[2175]: I0510 00:42:12.886211 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28"} err="failed to get container status \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d7bd052286d751155c23b351bbe0f9c8f6bb0d601090a73d615e49a9d980a28\": not found" May 10 00:42:12.886264 kubelet[2175]: I0510 00:42:12.886232 2175 scope.go:117] "RemoveContainer" containerID="2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122" May 10 00:42:12.886540 env[1307]: time="2025-05-10T00:42:12.886473900Z" level=error msg="ContainerStatus for \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\": not found" May 10 00:42:12.886662 kubelet[2175]: E0510 00:42:12.886643 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\": not found" containerID="2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122" May 10 00:42:12.886692 kubelet[2175]: I0510 00:42:12.886665 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122"} err="failed to get container status \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"2cfd51bfcef44fd1d09c965885acb628f507f21a634f00e2d5dfe37f63024122\": not found" May 10 00:42:12.886692 kubelet[2175]: I0510 00:42:12.886679 2175 scope.go:117] "RemoveContainer" containerID="537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b" May 10 00:42:12.887607 env[1307]: time="2025-05-10T00:42:12.887577180Z" level=info msg="RemoveContainer for \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\"" May 10 00:42:12.891003 env[1307]: time="2025-05-10T00:42:12.890960355Z" level=info msg="RemoveContainer for \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\" returns successfully" May 10 00:42:12.891181 kubelet[2175]: I0510 00:42:12.891145 2175 scope.go:117] "RemoveContainer" containerID="537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b" May 10 00:42:12.891383 env[1307]: time="2025-05-10T00:42:12.891325791Z" level=error msg="ContainerStatus for \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\": not found" May 10 00:42:12.891504 kubelet[2175]: E0510 00:42:12.891484 2175 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\": not found" containerID="537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b" May 10 00:42:12.891555 kubelet[2175]: I0510 00:42:12.891519 2175 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b"} err="failed to get container status \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"537bf3b854794ce58ac1a032edfb462efa921c0c577e61348ac6557ff3de373b\": not found" May 10 00:42:13.173193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0-rootfs.mount: Deactivated successfully. May 10 00:42:13.173414 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b3b2329377507f09303442a3ddcdcf2bb7b38fcc937a0d5491e95af99f1e5c0-shm.mount: Deactivated successfully. May 10 00:42:13.173540 systemd[1]: var-lib-kubelet-pods-1382a6d9\x2dea67\x2d4e19\x2dba10\x2d0fc67a849a35-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 00:42:13.173710 systemd[1]: var-lib-kubelet-pods-1382a6d9\x2dea67\x2d4e19\x2dba10\x2d0fc67a849a35-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmwd79.mount: Deactivated successfully. May 10 00:42:13.173926 systemd[1]: var-lib-kubelet-pods-1382a6d9\x2dea67\x2d4e19\x2dba10\x2d0fc67a849a35-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:42:13.174150 systemd[1]: var-lib-kubelet-pods-2058d697\x2d709e\x2d47f2\x2d9e0b\x2d7d3b8998b321-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlkxr9.mount: Deactivated successfully. 
May 10 00:42:13.689201 kubelet[2175]: I0510 00:42:13.689144 2175 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1382a6d9-ea67-4e19-ba10-0fc67a849a35" path="/var/lib/kubelet/pods/1382a6d9-ea67-4e19-ba10-0fc67a849a35/volumes" May 10 00:42:13.689872 kubelet[2175]: I0510 00:42:13.689848 2175 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2058d697-709e-47f2-9e0b-7d3b8998b321" path="/var/lib/kubelet/pods/2058d697-709e-47f2-9e0b-7d3b8998b321/volumes" May 10 00:42:13.736456 kubelet[2175]: E0510 00:42:13.736417 2175 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:42:13.970429 sshd[3862]: pam_unix(sshd:session): session closed for user core May 10 00:42:13.973624 systemd[1]: Started sshd@23-10.0.0.68:22-10.0.0.1:32894.service. May 10 00:42:13.974165 systemd[1]: sshd@22-10.0.0.68:22-10.0.0.1:32884.service: Deactivated successfully. May 10 00:42:13.975493 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:42:13.976181 systemd-logind[1299]: Session 23 logged out. Waiting for processes to exit. May 10 00:42:13.977100 systemd-logind[1299]: Removed session 23. May 10 00:42:14.018056 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 32894 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:42:14.019395 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:42:14.022695 systemd-logind[1299]: New session 24 of user core. May 10 00:42:14.023604 systemd[1]: Started session-24.scope. May 10 00:42:14.714618 sshd[4034]: pam_unix(sshd:session): session closed for user core May 10 00:42:14.716240 systemd[1]: Started sshd@24-10.0.0.68:22-10.0.0.1:32908.service. May 10 00:42:14.726494 systemd-logind[1299]: Session 24 logged out. Waiting for processes to exit. 
May 10 00:42:14.728720 kubelet[2175]: I0510 00:42:14.727470 2175 topology_manager.go:215] "Topology Admit Handler" podUID="f6828a48-e6da-432d-b717-65d3e1532ead" podNamespace="kube-system" podName="cilium-5hbl8" May 10 00:42:14.728720 kubelet[2175]: E0510 00:42:14.727543 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2058d697-709e-47f2-9e0b-7d3b8998b321" containerName="cilium-operator" May 10 00:42:14.728720 kubelet[2175]: E0510 00:42:14.727553 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1382a6d9-ea67-4e19-ba10-0fc67a849a35" containerName="mount-cgroup" May 10 00:42:14.728720 kubelet[2175]: E0510 00:42:14.727560 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1382a6d9-ea67-4e19-ba10-0fc67a849a35" containerName="mount-bpf-fs" May 10 00:42:14.728720 kubelet[2175]: E0510 00:42:14.727567 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1382a6d9-ea67-4e19-ba10-0fc67a849a35" containerName="apply-sysctl-overwrites" May 10 00:42:14.728720 kubelet[2175]: E0510 00:42:14.727573 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1382a6d9-ea67-4e19-ba10-0fc67a849a35" containerName="clean-cilium-state" May 10 00:42:14.728720 kubelet[2175]: E0510 00:42:14.727581 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1382a6d9-ea67-4e19-ba10-0fc67a849a35" containerName="cilium-agent" May 10 00:42:14.728720 kubelet[2175]: I0510 00:42:14.727621 2175 memory_manager.go:354] "RemoveStaleState removing state" podUID="2058d697-709e-47f2-9e0b-7d3b8998b321" containerName="cilium-operator" May 10 00:42:14.728720 kubelet[2175]: I0510 00:42:14.727629 2175 memory_manager.go:354] "RemoveStaleState removing state" podUID="1382a6d9-ea67-4e19-ba10-0fc67a849a35" containerName="cilium-agent" May 10 00:42:14.729771 systemd[1]: sshd@23-10.0.0.68:22-10.0.0.1:32894.service: Deactivated successfully. 
May 10 00:42:14.730761 systemd[1]: session-24.scope: Deactivated successfully. May 10 00:42:14.732992 systemd-logind[1299]: Removed session 24. May 10 00:42:14.769051 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 32908 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:42:14.770678 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:42:14.775236 systemd-logind[1299]: New session 25 of user core. May 10 00:42:14.775543 systemd[1]: Started session-25.scope. May 10 00:42:14.790549 kubelet[2175]: I0510 00:42:14.790504 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-cgroup\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790549 kubelet[2175]: I0510 00:42:14.790543 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-run\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790802 kubelet[2175]: I0510 00:42:14.790570 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-host-proc-sys-kernel\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790802 kubelet[2175]: I0510 00:42:14.790584 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-config-path\") pod \"cilium-5hbl8\" (UID: 
\"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790802 kubelet[2175]: I0510 00:42:14.790600 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-hostproc\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790802 kubelet[2175]: I0510 00:42:14.790612 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-etc-cni-netd\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790802 kubelet[2175]: I0510 00:42:14.790627 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-lib-modules\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790802 kubelet[2175]: I0510 00:42:14.790640 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6828a48-e6da-432d-b717-65d3e1532ead-clustermesh-secrets\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790990 kubelet[2175]: I0510 00:42:14.790655 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-ipsec-secrets\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790990 kubelet[2175]: I0510 
00:42:14.790667 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6828a48-e6da-432d-b717-65d3e1532ead-hubble-tls\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790990 kubelet[2175]: I0510 00:42:14.790681 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cni-path\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790990 kubelet[2175]: I0510 00:42:14.790694 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-xtables-lock\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790990 kubelet[2175]: I0510 00:42:14.790707 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t87v9\" (UniqueName: \"kubernetes.io/projected/f6828a48-e6da-432d-b717-65d3e1532ead-kube-api-access-t87v9\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.790990 kubelet[2175]: I0510 00:42:14.790721 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-bpf-maps\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.791181 kubelet[2175]: I0510 00:42:14.790734 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-host-proc-sys-net\") pod \"cilium-5hbl8\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " pod="kube-system/cilium-5hbl8" May 10 00:42:14.912997 sshd[4046]: pam_unix(sshd:session): session closed for user core May 10 00:42:14.920198 kubelet[2175]: E0510 00:42:14.917987 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:42:14.920426 env[1307]: time="2025-05-10T00:42:14.919725993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hbl8,Uid:f6828a48-e6da-432d-b717-65d3e1532ead,Namespace:kube-system,Attempt:0,}" May 10 00:42:14.919756 systemd[1]: Started sshd@25-10.0.0.68:22-10.0.0.1:32922.service. May 10 00:42:14.922588 systemd[1]: sshd@24-10.0.0.68:22-10.0.0.1:32908.service: Deactivated successfully. May 10 00:42:14.925167 systemd[1]: session-25.scope: Deactivated successfully. May 10 00:42:14.925869 systemd-logind[1299]: Session 25 logged out. Waiting for processes to exit. May 10 00:42:14.927536 systemd-logind[1299]: Removed session 25. May 10 00:42:14.946612 env[1307]: time="2025-05-10T00:42:14.946500154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:42:14.946785 env[1307]: time="2025-05-10T00:42:14.946631423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:42:14.946785 env[1307]: time="2025-05-10T00:42:14.946669105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:42:14.947009 env[1307]: time="2025-05-10T00:42:14.946966991Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f pid=4077 runtime=io.containerd.runc.v2 May 10 00:42:14.974330 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 32922 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:42:14.976880 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:42:14.981953 systemd[1]: Started session-26.scope. May 10 00:42:14.982246 systemd-logind[1299]: New session 26 of user core. May 10 00:42:14.992680 env[1307]: time="2025-05-10T00:42:14.992635475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5hbl8,Uid:f6828a48-e6da-432d-b717-65d3e1532ead,Namespace:kube-system,Attempt:0,} returns sandbox id \"004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f\"" May 10 00:42:14.994095 kubelet[2175]: E0510 00:42:14.994055 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:42:14.996141 env[1307]: time="2025-05-10T00:42:14.996110164Z" level=info msg="CreateContainer within sandbox \"004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:42:15.010126 env[1307]: time="2025-05-10T00:42:15.010034011Z" level=info msg="CreateContainer within sandbox \"004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e6f9e4555e396496aa2388e55246460c12be2bdb09e9d948549483ffa135c4c\"" May 10 00:42:15.011771 env[1307]: time="2025-05-10T00:42:15.010802322Z" level=info msg="StartContainer for 
\"8e6f9e4555e396496aa2388e55246460c12be2bdb09e9d948549483ffa135c4c\"" May 10 00:42:15.059074 env[1307]: time="2025-05-10T00:42:15.059013377Z" level=info msg="StartContainer for \"8e6f9e4555e396496aa2388e55246460c12be2bdb09e9d948549483ffa135c4c\" returns successfully" May 10 00:42:15.111181 env[1307]: time="2025-05-10T00:42:15.111122745Z" level=info msg="shim disconnected" id=8e6f9e4555e396496aa2388e55246460c12be2bdb09e9d948549483ffa135c4c May 10 00:42:15.111181 env[1307]: time="2025-05-10T00:42:15.111170737Z" level=warning msg="cleaning up after shim disconnected" id=8e6f9e4555e396496aa2388e55246460c12be2bdb09e9d948549483ffa135c4c namespace=k8s.io May 10 00:42:15.111181 env[1307]: time="2025-05-10T00:42:15.111180335Z" level=info msg="cleaning up dead shim" May 10 00:42:15.119273 env[1307]: time="2025-05-10T00:42:15.119230192Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:42:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4171 runtime=io.containerd.runc.v2\n" May 10 00:42:15.861676 env[1307]: time="2025-05-10T00:42:15.861631964Z" level=info msg="StopPodSandbox for \"004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f\"" May 10 00:42:15.861912 env[1307]: time="2025-05-10T00:42:15.861695595Z" level=info msg="Container to stop \"8e6f9e4555e396496aa2388e55246460c12be2bdb09e9d948549483ffa135c4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:42:15.885787 env[1307]: time="2025-05-10T00:42:15.885732022Z" level=info msg="shim disconnected" id=004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f May 10 00:42:15.886012 env[1307]: time="2025-05-10T00:42:15.885791645Z" level=warning msg="cleaning up after shim disconnected" id=004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f namespace=k8s.io May 10 00:42:15.886012 env[1307]: time="2025-05-10T00:42:15.885808147Z" level=info msg="cleaning up dead shim" May 10 00:42:15.893193 env[1307]: time="2025-05-10T00:42:15.893101795Z" 
level=warning msg="cleanup warnings time=\"2025-05-10T00:42:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4203 runtime=io.containerd.runc.v2\n" May 10 00:42:15.893582 env[1307]: time="2025-05-10T00:42:15.893545219Z" level=info msg="TearDown network for sandbox \"004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f\" successfully" May 10 00:42:15.893649 env[1307]: time="2025-05-10T00:42:15.893575006Z" level=info msg="StopPodSandbox for \"004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f\" returns successfully" May 10 00:42:15.897035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-004862a732f1ed6594a9cd2862367178a415a7ea9fd3548de1b21d7458f8f29f-shm.mount: Deactivated successfully. May 10 00:42:15.998863 kubelet[2175]: I0510 00:42:15.998780 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-host-proc-sys-net\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.998863 kubelet[2175]: I0510 00:42:15.998832 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-host-proc-sys-kernel\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.998863 kubelet[2175]: I0510 00:42:15.998849 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-xtables-lock\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.998863 kubelet[2175]: I0510 00:42:15.998873 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-ipsec-secrets\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999565 kubelet[2175]: I0510 00:42:15.998887 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-run\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999565 kubelet[2175]: I0510 00:42:15.998901 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-bpf-maps\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999565 kubelet[2175]: I0510 00:42:15.998916 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-etc-cni-netd\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999565 kubelet[2175]: I0510 00:42:15.998928 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-cgroup\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999565 kubelet[2175]: I0510 00:42:15.998943 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6828a48-e6da-432d-b717-65d3e1532ead-clustermesh-secrets\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999565 kubelet[2175]: I0510 00:42:15.998959 2175 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cni-path\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999795 kubelet[2175]: I0510 00:42:15.998988 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-config-path\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999795 kubelet[2175]: I0510 00:42:15.999008 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6828a48-e6da-432d-b717-65d3e1532ead-hubble-tls\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999795 kubelet[2175]: I0510 00:42:15.999024 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t87v9\" (UniqueName: \"kubernetes.io/projected/f6828a48-e6da-432d-b717-65d3e1532ead-kube-api-access-t87v9\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:15.999795 kubelet[2175]: I0510 00:42:15.998953 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:15.999795 kubelet[2175]: I0510 00:42:15.998953 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:15.999992 kubelet[2175]: I0510 00:42:15.998983 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:15.999992 kubelet[2175]: I0510 00:42:15.999005 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:15.999992 kubelet[2175]: I0510 00:42:15.999001 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:15.999992 kubelet[2175]: I0510 00:42:15.999025 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:15.999992 kubelet[2175]: I0510 00:42:15.999094 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-hostproc" (OuterVolumeSpecName: "hostproc") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:16.000240 kubelet[2175]: I0510 00:42:15.999132 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:16.000240 kubelet[2175]: I0510 00:42:15.999661 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cni-path" (OuterVolumeSpecName: "cni-path") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:16.000240 kubelet[2175]: I0510 00:42:15.999036 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-hostproc\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:16.000240 kubelet[2175]: I0510 00:42:16.000155 2175 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-lib-modules\") pod \"f6828a48-e6da-432d-b717-65d3e1532ead\" (UID: \"f6828a48-e6da-432d-b717-65d3e1532ead\") " May 10 00:42:16.000411 kubelet[2175]: I0510 00:42:16.000331 2175 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cni-path\") on node \"localhost\" DevicePath \"\"" May 10 00:42:16.000411 kubelet[2175]: I0510 00:42:16.000343 2175 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-hostproc\") on node \"localhost\" DevicePath \"\"" May 10 00:42:16.000411 kubelet[2175]: I0510 00:42:16.000354 2175 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 10 00:42:16.000411 kubelet[2175]: I0510 00:42:16.000374 2175 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 10 00:42:16.000411 kubelet[2175]: I0510 00:42:16.000383 2175 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 10 00:42:16.000411 kubelet[2175]: I0510 00:42:16.000390 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-run\") on node \"localhost\" DevicePath \"\"" May 10 00:42:16.000662 kubelet[2175]: I0510 00:42:16.000520 2175 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 10 00:42:16.000662 kubelet[2175]: I0510 00:42:16.000529 2175 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 10 00:42:16.000662 kubelet[2175]: I0510 00:42:16.000536 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 10 00:42:16.003462 kubelet[2175]: I0510 00:42:16.002358 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 00:42:16.003992 systemd[1]: var-lib-kubelet-pods-f6828a48\x2de6da\x2d432d\x2db717\x2d65d3e1532ead-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:42:16.004146 systemd[1]: var-lib-kubelet-pods-f6828a48\x2de6da\x2d432d\x2db717\x2d65d3e1532ead-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 10 00:42:16.007245 kubelet[2175]: I0510 00:42:16.006972 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6828a48-e6da-432d-b717-65d3e1532ead-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:42:16.006437 systemd[1]: var-lib-kubelet-pods-f6828a48\x2de6da\x2d432d\x2db717\x2d65d3e1532ead-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 10 00:42:16.008178 kubelet[2175]: I0510 00:42:16.008143 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:42:16.008275 kubelet[2175]: I0510 00:42:16.008228 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6828a48-e6da-432d-b717-65d3e1532ead-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:42:16.009825 kubelet[2175]: I0510 00:42:16.009792 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:42:16.010292 kubelet[2175]: I0510 00:42:16.010267 2175 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6828a48-e6da-432d-b717-65d3e1532ead-kube-api-access-t87v9" (OuterVolumeSpecName: "kube-api-access-t87v9") pod "f6828a48-e6da-432d-b717-65d3e1532ead" (UID: "f6828a48-e6da-432d-b717-65d3e1532ead"). InnerVolumeSpecName "kube-api-access-t87v9". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:42:16.011462 systemd[1]: var-lib-kubelet-pods-f6828a48\x2de6da\x2d432d\x2db717\x2d65d3e1532ead-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt87v9.mount: Deactivated successfully.
May 10 00:42:16.101776 kubelet[2175]: I0510 00:42:16.101702 2175 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6828a48-e6da-432d-b717-65d3e1532ead-lib-modules\") on node \"localhost\" DevicePath \"\""
May 10 00:42:16.101776 kubelet[2175]: I0510 00:42:16.101751 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 10 00:42:16.101776 kubelet[2175]: I0510 00:42:16.101765 2175 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6828a48-e6da-432d-b717-65d3e1532ead-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 10 00:42:16.101776 kubelet[2175]: I0510 00:42:16.101777 2175 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6828a48-e6da-432d-b717-65d3e1532ead-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 10 00:42:16.101776 kubelet[2175]: I0510 00:42:16.101789 2175 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6828a48-e6da-432d-b717-65d3e1532ead-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 10 00:42:16.101776 kubelet[2175]: I0510 00:42:16.101796 2175 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t87v9\" (UniqueName: \"kubernetes.io/projected/f6828a48-e6da-432d-b717-65d3e1532ead-kube-api-access-t87v9\") on node \"localhost\" DevicePath \"\""
May 10 00:42:16.209451 kubelet[2175]: I0510 00:42:16.209257 2175 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:42:16Z","lastTransitionTime":"2025-05-10T00:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 10 00:42:16.687864 kubelet[2175]: E0510 00:42:16.687794 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:16.864658 kubelet[2175]: I0510 00:42:16.864622 2175 scope.go:117] "RemoveContainer" containerID="8e6f9e4555e396496aa2388e55246460c12be2bdb09e9d948549483ffa135c4c"
May 10 00:42:16.866323 env[1307]: time="2025-05-10T00:42:16.866280990Z" level=info msg="RemoveContainer for \"8e6f9e4555e396496aa2388e55246460c12be2bdb09e9d948549483ffa135c4c\""
May 10 00:42:16.869949 env[1307]: time="2025-05-10T00:42:16.869913656Z" level=info msg="RemoveContainer for \"8e6f9e4555e396496aa2388e55246460c12be2bdb09e9d948549483ffa135c4c\" returns successfully"
May 10 00:42:16.927169 kubelet[2175]: I0510 00:42:16.927094 2175 topology_manager.go:215] "Topology Admit Handler" podUID="118e11f2-23bb-48a2-ac61-c73e24b08e9a" podNamespace="kube-system" podName="cilium-bz278"
May 10 00:42:16.927169 kubelet[2175]: E0510 00:42:16.927173 2175 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6828a48-e6da-432d-b717-65d3e1532ead" containerName="mount-cgroup"
May 10 00:42:16.927465 kubelet[2175]: I0510 00:42:16.927196 2175 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6828a48-e6da-432d-b717-65d3e1532ead" containerName="mount-cgroup"
May 10 00:42:17.008039 kubelet[2175]: I0510 00:42:17.007866 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-xtables-lock\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008039 kubelet[2175]: I0510 00:42:17.007918 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/118e11f2-23bb-48a2-ac61-c73e24b08e9a-cilium-ipsec-secrets\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008039 kubelet[2175]: I0510 00:42:17.007943 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/118e11f2-23bb-48a2-ac61-c73e24b08e9a-cilium-config-path\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008039 kubelet[2175]: I0510 00:42:17.007995 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/118e11f2-23bb-48a2-ac61-c73e24b08e9a-hubble-tls\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008039 kubelet[2175]: I0510 00:42:17.008020 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-cilium-run\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008039 kubelet[2175]: I0510 00:42:17.008041 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-cni-path\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008737 kubelet[2175]: I0510 00:42:17.008060 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/118e11f2-23bb-48a2-ac61-c73e24b08e9a-clustermesh-secrets\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008737 kubelet[2175]: I0510 00:42:17.008083 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-host-proc-sys-net\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008737 kubelet[2175]: I0510 00:42:17.008115 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-hostproc\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008737 kubelet[2175]: I0510 00:42:17.008142 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-bpf-maps\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008737 kubelet[2175]: I0510 00:42:17.008187 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-lib-modules\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008737 kubelet[2175]: I0510 00:42:17.008223 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2ngg\" (UniqueName: \"kubernetes.io/projected/118e11f2-23bb-48a2-ac61-c73e24b08e9a-kube-api-access-k2ngg\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008921 kubelet[2175]: I0510 00:42:17.008272 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-cilium-cgroup\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008921 kubelet[2175]: I0510 00:42:17.008321 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-host-proc-sys-kernel\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.008921 kubelet[2175]: I0510 00:42:17.008356 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/118e11f2-23bb-48a2-ac61-c73e24b08e9a-etc-cni-netd\") pod \"cilium-bz278\" (UID: \"118e11f2-23bb-48a2-ac61-c73e24b08e9a\") " pod="kube-system/cilium-bz278"
May 10 00:42:17.234536 kubelet[2175]: E0510 00:42:17.234470 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:17.235104 env[1307]: time="2025-05-10T00:42:17.235040988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bz278,Uid:118e11f2-23bb-48a2-ac61-c73e24b08e9a,Namespace:kube-system,Attempt:0,}"
May 10 00:42:17.302970 env[1307]: time="2025-05-10T00:42:17.302805786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:42:17.302970 env[1307]: time="2025-05-10T00:42:17.302858065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:42:17.302970 env[1307]: time="2025-05-10T00:42:17.302886199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:42:17.303816 env[1307]: time="2025-05-10T00:42:17.303694887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b pid=4231 runtime=io.containerd.runc.v2
May 10 00:42:17.338802 env[1307]: time="2025-05-10T00:42:17.338754464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bz278,Uid:118e11f2-23bb-48a2-ac61-c73e24b08e9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\""
May 10 00:42:17.339599 kubelet[2175]: E0510 00:42:17.339334 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:17.340841 env[1307]: time="2025-05-10T00:42:17.340814730Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 00:42:17.352061 env[1307]: time="2025-05-10T00:42:17.352021724Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ce4b91a3fb710b5f8c3fdfa52b3e81d43e0a6af03fb5eb3c50fe7fd0c292a06c\""
May 10 00:42:17.352592 env[1307]: time="2025-05-10T00:42:17.352549187Z" level=info msg="StartContainer for \"ce4b91a3fb710b5f8c3fdfa52b3e81d43e0a6af03fb5eb3c50fe7fd0c292a06c\""
May 10 00:42:17.467609 env[1307]: time="2025-05-10T00:42:17.467537967Z" level=info msg="StartContainer for \"ce4b91a3fb710b5f8c3fdfa52b3e81d43e0a6af03fb5eb3c50fe7fd0c292a06c\" returns successfully"
May 10 00:42:17.565519 env[1307]: time="2025-05-10T00:42:17.565348000Z" level=info msg="shim disconnected" id=ce4b91a3fb710b5f8c3fdfa52b3e81d43e0a6af03fb5eb3c50fe7fd0c292a06c
May 10 00:42:17.565519 env[1307]: time="2025-05-10T00:42:17.565418284Z" level=warning msg="cleaning up after shim disconnected" id=ce4b91a3fb710b5f8c3fdfa52b3e81d43e0a6af03fb5eb3c50fe7fd0c292a06c namespace=k8s.io
May 10 00:42:17.565519 env[1307]: time="2025-05-10T00:42:17.565429114Z" level=info msg="cleaning up dead shim"
May 10 00:42:17.572347 env[1307]: time="2025-05-10T00:42:17.572311738Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:42:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4314 runtime=io.containerd.runc.v2\n"
May 10 00:42:17.690063 kubelet[2175]: I0510 00:42:17.690008 2175 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6828a48-e6da-432d-b717-65d3e1532ead" path="/var/lib/kubelet/pods/f6828a48-e6da-432d-b717-65d3e1532ead/volumes"
May 10 00:42:17.868673 kubelet[2175]: E0510 00:42:17.868632 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:17.871710 env[1307]: time="2025-05-10T00:42:17.871652003Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 10 00:42:18.038622 env[1307]: time="2025-05-10T00:42:18.038548678Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6cb3351e4ee82c0814686525fc891566c482d0135828e2b7613a4d97d7e131e4\""
May 10 00:42:18.039407 env[1307]: time="2025-05-10T00:42:18.039349189Z" level=info msg="StartContainer for \"6cb3351e4ee82c0814686525fc891566c482d0135828e2b7613a4d97d7e131e4\""
May 10 00:42:18.104164 env[1307]: time="2025-05-10T00:42:18.104080273Z" level=info msg="StartContainer for \"6cb3351e4ee82c0814686525fc891566c482d0135828e2b7613a4d97d7e131e4\" returns successfully"
May 10 00:42:18.119608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cb3351e4ee82c0814686525fc891566c482d0135828e2b7613a4d97d7e131e4-rootfs.mount: Deactivated successfully.
May 10 00:42:18.124027 env[1307]: time="2025-05-10T00:42:18.123967854Z" level=info msg="shim disconnected" id=6cb3351e4ee82c0814686525fc891566c482d0135828e2b7613a4d97d7e131e4
May 10 00:42:18.124146 env[1307]: time="2025-05-10T00:42:18.124026095Z" level=warning msg="cleaning up after shim disconnected" id=6cb3351e4ee82c0814686525fc891566c482d0135828e2b7613a4d97d7e131e4 namespace=k8s.io
May 10 00:42:18.124146 env[1307]: time="2025-05-10T00:42:18.124039440Z" level=info msg="cleaning up dead shim"
May 10 00:42:18.131956 env[1307]: time="2025-05-10T00:42:18.131894971Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:42:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4374 runtime=io.containerd.runc.v2\n"
May 10 00:42:18.737381 kubelet[2175]: E0510 00:42:18.737321 2175 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:42:18.873277 kubelet[2175]: E0510 00:42:18.873209 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:18.876004 env[1307]: time="2025-05-10T00:42:18.875963759Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 00:42:18.894460 env[1307]: time="2025-05-10T00:42:18.894398317Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"62606ddc23f5bc6f845e87e68d4c6162336b80a176a4d70a6d7c748281a3cab8\""
May 10 00:42:18.895069 env[1307]: time="2025-05-10T00:42:18.895026752Z" level=info msg="StartContainer for \"62606ddc23f5bc6f845e87e68d4c6162336b80a176a4d70a6d7c748281a3cab8\""
May 10 00:42:18.939860 env[1307]: time="2025-05-10T00:42:18.939802506Z" level=info msg="StartContainer for \"62606ddc23f5bc6f845e87e68d4c6162336b80a176a4d70a6d7c748281a3cab8\" returns successfully"
May 10 00:42:18.959828 env[1307]: time="2025-05-10T00:42:18.959746845Z" level=info msg="shim disconnected" id=62606ddc23f5bc6f845e87e68d4c6162336b80a176a4d70a6d7c748281a3cab8
May 10 00:42:18.959828 env[1307]: time="2025-05-10T00:42:18.959808201Z" level=warning msg="cleaning up after shim disconnected" id=62606ddc23f5bc6f845e87e68d4c6162336b80a176a4d70a6d7c748281a3cab8 namespace=k8s.io
May 10 00:42:18.959828 env[1307]: time="2025-05-10T00:42:18.959824993Z" level=info msg="cleaning up dead shim"
May 10 00:42:18.968322 env[1307]: time="2025-05-10T00:42:18.968261829Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:42:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4430 runtime=io.containerd.runc.v2\n"
May 10 00:42:19.114711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62606ddc23f5bc6f845e87e68d4c6162336b80a176a4d70a6d7c748281a3cab8-rootfs.mount: Deactivated successfully.
May 10 00:42:19.877469 kubelet[2175]: E0510 00:42:19.877427 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:19.879654 env[1307]: time="2025-05-10T00:42:19.879609632Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:42:19.896478 env[1307]: time="2025-05-10T00:42:19.896414458Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86049936814a43a4dd460f763cd4b3df84babfa61c90250475ac78e38dba8122\""
May 10 00:42:19.897022 env[1307]: time="2025-05-10T00:42:19.896979502Z" level=info msg="StartContainer for \"86049936814a43a4dd460f763cd4b3df84babfa61c90250475ac78e38dba8122\""
May 10 00:42:19.943533 env[1307]: time="2025-05-10T00:42:19.943472358Z" level=info msg="StartContainer for \"86049936814a43a4dd460f763cd4b3df84babfa61c90250475ac78e38dba8122\" returns successfully"
May 10 00:42:19.967979 env[1307]: time="2025-05-10T00:42:19.967912515Z" level=info msg="shim disconnected" id=86049936814a43a4dd460f763cd4b3df84babfa61c90250475ac78e38dba8122
May 10 00:42:19.967979 env[1307]: time="2025-05-10T00:42:19.967965125Z" level=warning msg="cleaning up after shim disconnected" id=86049936814a43a4dd460f763cd4b3df84babfa61c90250475ac78e38dba8122 namespace=k8s.io
May 10 00:42:19.967979 env[1307]: time="2025-05-10T00:42:19.967974914Z" level=info msg="cleaning up dead shim"
May 10 00:42:19.975887 env[1307]: time="2025-05-10T00:42:19.975822247Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:42:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4485 runtime=io.containerd.runc.v2\n"
May 10 00:42:20.114770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86049936814a43a4dd460f763cd4b3df84babfa61c90250475ac78e38dba8122-rootfs.mount: Deactivated successfully.
May 10 00:42:20.881724 kubelet[2175]: E0510 00:42:20.881691 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:20.883976 env[1307]: time="2025-05-10T00:42:20.883934548Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:42:20.907506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257968733.mount: Deactivated successfully.
May 10 00:42:20.910819 env[1307]: time="2025-05-10T00:42:20.910770541Z" level=info msg="CreateContainer within sandbox \"e6b96953bd9895bd6b56cd9d7ba8c85d626ce7b9415e1bdfc428ecb4a14d391b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"502a2230547a582262fcc1593b3e888f2cf398a8be873d11565e867e70d028b6\""
May 10 00:42:20.911985 env[1307]: time="2025-05-10T00:42:20.911330005Z" level=info msg="StartContainer for \"502a2230547a582262fcc1593b3e888f2cf398a8be873d11565e867e70d028b6\""
May 10 00:42:20.960711 env[1307]: time="2025-05-10T00:42:20.960631977Z" level=info msg="StartContainer for \"502a2230547a582262fcc1593b3e888f2cf398a8be873d11565e867e70d028b6\" returns successfully"
May 10 00:42:21.267400 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 10 00:42:21.885531 kubelet[2175]: E0510 00:42:21.885484 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:21.900242 kubelet[2175]: I0510 00:42:21.900181 2175 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bz278" podStartSLOduration=5.900162951 podStartE2EDuration="5.900162951s" podCreationTimestamp="2025-05-10 00:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:42:21.899950557 +0000 UTC m=+88.329532344" watchObservedRunningTime="2025-05-10 00:42:21.900162951 +0000 UTC m=+88.329744728"
May 10 00:42:22.687512 kubelet[2175]: E0510 00:42:22.687459 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:23.236432 kubelet[2175]: E0510 00:42:23.236392 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:23.451754 systemd[1]: run-containerd-runc-k8s.io-502a2230547a582262fcc1593b3e888f2cf398a8be873d11565e867e70d028b6-runc.qUz4jI.mount: Deactivated successfully.
May 10 00:42:24.101474 systemd-networkd[1086]: lxc_health: Link UP
May 10 00:42:24.114511 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:42:24.114244 systemd-networkd[1086]: lxc_health: Gained carrier
May 10 00:42:24.687493 kubelet[2175]: E0510 00:42:24.687446 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:25.236007 kubelet[2175]: E0510 00:42:25.235964 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:25.559354 systemd[1]: run-containerd-runc-k8s.io-502a2230547a582262fcc1593b3e888f2cf398a8be873d11565e867e70d028b6-runc.J01y06.mount: Deactivated successfully.
May 10 00:42:25.648781 systemd-networkd[1086]: lxc_health: Gained IPv6LL
May 10 00:42:25.892172 kubelet[2175]: E0510 00:42:25.892120 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:26.893658 kubelet[2175]: E0510 00:42:26.893615 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:27.660285 systemd[1]: run-containerd-runc-k8s.io-502a2230547a582262fcc1593b3e888f2cf398a8be873d11565e867e70d028b6-runc.Ch5OuI.mount: Deactivated successfully.
May 10 00:42:29.688214 kubelet[2175]: E0510 00:42:29.688115 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:42:29.804450 sshd[4065]: pam_unix(sshd:session): session closed for user core
May 10 00:42:29.806904 systemd[1]: sshd@25-10.0.0.68:22-10.0.0.1:32922.service: Deactivated successfully.
May 10 00:42:29.807908 systemd-logind[1299]: Session 26 logged out. Waiting for processes to exit.
May 10 00:42:29.808004 systemd[1]: session-26.scope: Deactivated successfully.
May 10 00:42:29.808933 systemd-logind[1299]: Removed session 26.