May 17 00:41:25.034734 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025 May 17 00:41:25.034787 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:41:25.034808 kernel: BIOS-provided physical RAM map: May 17 00:41:25.034818 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 17 00:41:25.034849 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 17 00:41:25.034863 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 17 00:41:25.034874 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable May 17 00:41:25.034885 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved May 17 00:41:25.034899 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 17 00:41:25.034908 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 17 00:41:25.034918 kernel: NX (Execute Disable) protection: active May 17 00:41:25.034928 kernel: SMBIOS 2.8 present. 
May 17 00:41:25.034939 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 May 17 00:41:25.034950 kernel: Hypervisor detected: KVM May 17 00:41:25.034964 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:41:25.034980 kernel: kvm-clock: cpu 0, msr 5b19a001, primary cpu clock May 17 00:41:25.034992 kernel: kvm-clock: using sched offset of 3680581531 cycles May 17 00:41:25.035004 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:41:25.035022 kernel: tsc: Detected 2494.140 MHz processor May 17 00:41:25.035036 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:41:25.035050 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:41:25.035063 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 May 17 00:41:25.035077 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:41:25.035093 kernel: ACPI: Early table checksum verification disabled May 17 00:41:25.035105 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) May 17 00:41:25.035118 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:41:25.035131 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:41:25.035143 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:41:25.035154 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 17 00:41:25.035166 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:41:25.035178 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:41:25.035189 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:41:25.035205 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:41:25.035217 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] May 17 00:41:25.035228 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] May 17 00:41:25.035239 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 17 00:41:25.035250 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] May 17 00:41:25.035262 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] May 17 00:41:25.035273 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] May 17 00:41:25.035284 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] May 17 00:41:25.035305 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 17 00:41:25.035318 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 17 00:41:25.035329 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 17 00:41:25.035341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 17 00:41:25.035354 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] May 17 00:41:25.035367 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] May 17 00:41:25.035383 kernel: Zone ranges: May 17 00:41:25.035395 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:41:25.035406 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] May 17 00:41:25.035419 kernel: Normal empty May 17 00:41:25.035431 kernel: Movable zone start for each node May 17 00:41:25.035442 kernel: Early memory node ranges May 17 00:41:25.035454 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 17 00:41:25.035466 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] May 17 00:41:25.035478 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] May 17 00:41:25.035494 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:41:25.035511 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:41:25.035524 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges May 
17 00:41:25.035536 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:41:25.035548 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:41:25.035560 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:41:25.035573 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:41:25.035585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:41:25.035596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:41:25.035612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:41:25.035627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:41:25.035665 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:41:25.035676 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:41:25.035688 kernel: TSC deadline timer available May 17 00:41:25.035700 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 17 00:41:25.035711 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices May 17 00:41:25.035723 kernel: Booting paravirtualized kernel on KVM May 17 00:41:25.035735 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:41:25.035751 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 17 00:41:25.035763 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 17 00:41:25.035774 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 17 00:41:25.035786 kernel: pcpu-alloc: [0] 0 1 May 17 00:41:25.035798 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 May 17 00:41:25.035810 kernel: kvm-guest: PV spinlocks disabled, no host support May 17 00:41:25.035822 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515803 May 17 00:41:25.035833 kernel: Policy zone: DMA32 May 17 00:41:25.035847 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:41:25.035864 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:41:25.035875 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:41:25.035887 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 17 00:41:25.035899 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:41:25.035911 kernel: Memory: 1973276K/2096612K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 123076K reserved, 0K cma-reserved) May 17 00:41:25.035924 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 17 00:41:25.035936 kernel: Kernel/User page tables isolation: enabled May 17 00:41:25.035949 kernel: ftrace: allocating 34585 entries in 136 pages May 17 00:41:25.035989 kernel: ftrace: allocated 136 pages with 2 groups May 17 00:41:25.036001 kernel: rcu: Hierarchical RCU implementation. May 17 00:41:25.036026 kernel: rcu: RCU event tracing is enabled. May 17 00:41:25.036038 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 17 00:41:25.036050 kernel: Rude variant of Tasks RCU enabled. May 17 00:41:25.036061 kernel: Tracing variant of Tasks RCU enabled. May 17 00:41:25.036073 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 17 00:41:25.036085 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 17 00:41:25.036097 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 17 00:41:25.036119 kernel: random: crng init done May 17 00:41:25.036146 kernel: Console: colour VGA+ 80x25 May 17 00:41:25.036157 kernel: printk: console [tty0] enabled May 17 00:41:25.036170 kernel: printk: console [ttyS0] enabled May 17 00:41:25.036182 kernel: ACPI: Core revision 20210730 May 17 00:41:25.036194 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:41:25.036206 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:41:25.036217 kernel: x2apic enabled May 17 00:41:25.036229 kernel: Switched APIC routing to physical x2apic. May 17 00:41:25.036240 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:41:25.036256 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 17 00:41:25.036268 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) May 17 00:41:25.036286 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 17 00:41:25.036298 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 17 00:41:25.036309 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:41:25.036322 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:41:25.036333 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:41:25.036345 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 17 00:41:25.036363 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:41:25.036386 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 17 00:41:25.036402 kernel: MDS: Mitigation: Clear CPU buffers May 17 00:41:25.036417 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 17 00:41:25.036430 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:41:25.036442 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:41:25.036454 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:41:25.036467 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:41:25.036479 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 17 00:41:25.036491 kernel: Freeing SMP alternatives memory: 32K May 17 00:41:25.036508 kernel: pid_max: default: 32768 minimum: 301 May 17 00:41:25.036519 kernel: LSM: Security Framework initializing May 17 00:41:25.036532 kernel: SELinux: Initializing. 
May 17 00:41:25.036544 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:41:25.036557 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 17 00:41:25.036569 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) May 17 00:41:25.036581 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. May 17 00:41:25.036598 kernel: signal: max sigframe size: 1776 May 17 00:41:25.036610 kernel: rcu: Hierarchical SRCU implementation. May 17 00:41:25.036622 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 17 00:41:25.039439 kernel: smp: Bringing up secondary CPUs ... May 17 00:41:25.039492 kernel: x86: Booting SMP configuration: May 17 00:41:25.039506 kernel: .... node #0, CPUs: #1 May 17 00:41:25.039520 kernel: kvm-clock: cpu 1, msr 5b19a041, secondary cpu clock May 17 00:41:25.039533 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 May 17 00:41:25.039546 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:41:25.039572 kernel: smpboot: Max logical packages: 1 May 17 00:41:25.039586 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) May 17 00:41:25.039599 kernel: devtmpfs: initialized May 17 00:41:25.039612 kernel: x86/mm: Memory block size: 128MB May 17 00:41:25.039625 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:41:25.039649 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:41:25.039663 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:41:25.039676 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:41:25.039689 kernel: audit: initializing netlink subsys (disabled) May 17 00:41:25.039706 kernel: audit: type=2000 audit(1747442484.271:1): state=initialized audit_enabled=0 res=1 May 17 00:41:25.039744 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:41:25.039757 kernel: 
thermal_sys: Registered thermal governor 'user_space' May 17 00:41:25.044796 kernel: cpuidle: using governor menu May 17 00:41:25.044812 kernel: ACPI: bus type PCI registered May 17 00:41:25.044831 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:41:25.044851 kernel: dca service started, version 1.12.1 May 17 00:41:25.044865 kernel: PCI: Using configuration type 1 for base access May 17 00:41:25.044878 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 17 00:41:25.044901 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:41:25.044914 kernel: ACPI: Added _OSI(Module Device) May 17 00:41:25.044927 kernel: ACPI: Added _OSI(Processor Device) May 17 00:41:25.044940 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:41:25.044952 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:41:25.044965 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:41:25.044978 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:41:25.044991 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:41:25.045003 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:41:25.045019 kernel: ACPI: Interpreter enabled May 17 00:41:25.045032 kernel: ACPI: PM: (supports S0 S5) May 17 00:41:25.045060 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:41:25.045073 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:41:25.045085 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 17 00:41:25.045097 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:41:25.045387 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 17 00:41:25.045553 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
May 17 00:41:25.045577 kernel: acpiphp: Slot [3] registered May 17 00:41:25.045590 kernel: acpiphp: Slot [4] registered May 17 00:41:25.045603 kernel: acpiphp: Slot [5] registered May 17 00:41:25.045629 kernel: acpiphp: Slot [6] registered May 17 00:41:25.045669 kernel: acpiphp: Slot [7] registered May 17 00:41:25.045681 kernel: acpiphp: Slot [8] registered May 17 00:41:25.045694 kernel: acpiphp: Slot [9] registered May 17 00:41:25.045707 kernel: acpiphp: Slot [10] registered May 17 00:41:25.045719 kernel: acpiphp: Slot [11] registered May 17 00:41:25.045736 kernel: acpiphp: Slot [12] registered May 17 00:41:25.045749 kernel: acpiphp: Slot [13] registered May 17 00:41:25.045761 kernel: acpiphp: Slot [14] registered May 17 00:41:25.045773 kernel: acpiphp: Slot [15] registered May 17 00:41:25.045785 kernel: acpiphp: Slot [16] registered May 17 00:41:25.045797 kernel: acpiphp: Slot [17] registered May 17 00:41:25.045808 kernel: acpiphp: Slot [18] registered May 17 00:41:25.045821 kernel: acpiphp: Slot [19] registered May 17 00:41:25.045833 kernel: acpiphp: Slot [20] registered May 17 00:41:25.045850 kernel: acpiphp: Slot [21] registered May 17 00:41:25.045863 kernel: acpiphp: Slot [22] registered May 17 00:41:25.045875 kernel: acpiphp: Slot [23] registered May 17 00:41:25.045887 kernel: acpiphp: Slot [24] registered May 17 00:41:25.045898 kernel: acpiphp: Slot [25] registered May 17 00:41:25.045910 kernel: acpiphp: Slot [26] registered May 17 00:41:25.045922 kernel: acpiphp: Slot [27] registered May 17 00:41:25.045934 kernel: acpiphp: Slot [28] registered May 17 00:41:25.045945 kernel: acpiphp: Slot [29] registered May 17 00:41:25.045957 kernel: acpiphp: Slot [30] registered May 17 00:41:25.045973 kernel: acpiphp: Slot [31] registered May 17 00:41:25.045984 kernel: PCI host bridge to bus 0000:00 May 17 00:41:25.046156 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:41:25.046310 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] May 17 00:41:25.046474 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:41:25.046603 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 17 00:41:25.046754 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] May 17 00:41:25.046890 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:41:25.047084 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 17 00:41:25.047257 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 17 00:41:25.047474 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 17 00:41:25.047625 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] May 17 00:41:25.047784 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 17 00:41:25.047943 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 17 00:41:25.048080 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 17 00:41:25.048220 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 17 00:41:25.048399 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 May 17 00:41:25.048542 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] May 17 00:41:25.048716 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 17 00:41:25.048858 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 17 00:41:25.049015 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 17 00:41:25.049173 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 17 00:41:25.049342 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 17 00:41:25.049500 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] May 17 00:41:25.061794 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] May 17 00:41:25.062031 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] May 17 
00:41:25.062211 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:41:25.062404 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 17 00:41:25.062565 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] May 17 00:41:25.062770 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] May 17 00:41:25.062931 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] May 17 00:41:25.063105 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 17 00:41:25.063251 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] May 17 00:41:25.063411 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] May 17 00:41:25.063571 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] May 17 00:41:25.063840 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 May 17 00:41:25.063986 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] May 17 00:41:25.064127 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] May 17 00:41:25.064268 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] May 17 00:41:25.064432 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 May 17 00:41:25.064583 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] May 17 00:41:25.064757 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] May 17 00:41:25.064907 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] May 17 00:41:25.065066 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 May 17 00:41:25.065209 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] May 17 00:41:25.065369 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] May 17 00:41:25.065506 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] May 17 00:41:25.065754 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 May 17 00:41:25.065898 kernel: pci 0000:00:08.0: reg 0x10: [io 
0xc140-0xc17f] May 17 00:41:25.066036 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] May 17 00:41:25.066051 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:41:25.066064 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:41:25.066087 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:41:25.066100 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:41:25.066120 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 17 00:41:25.066133 kernel: iommu: Default domain type: Translated May 17 00:41:25.066145 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:41:25.066300 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 17 00:41:25.066459 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:41:25.066588 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 17 00:41:25.066607 kernel: vgaarb: loaded May 17 00:41:25.066638 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:41:25.066667 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:41:25.067763 kernel: PTP clock support registered May 17 00:41:25.067778 kernel: PCI: Using ACPI for IRQ routing May 17 00:41:25.067790 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:41:25.067803 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 17 00:41:25.067815 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] May 17 00:41:25.067828 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:41:25.067840 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:41:25.067852 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:41:25.067865 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:41:25.067885 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:41:25.067899 kernel: pnp: PnP ACPI init May 17 00:41:25.067910 kernel: pnp: PnP ACPI: found 4 devices May 17 00:41:25.067924 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:41:25.067936 kernel: NET: Registered PF_INET protocol family May 17 00:41:25.067949 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:41:25.067960 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 17 00:41:25.067973 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:41:25.067989 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:41:25.068000 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 00:41:25.068011 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 17 00:41:25.068023 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:41:25.068034 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:41:25.068046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:41:25.068057 
kernel: NET: Registered PF_XDP protocol family May 17 00:41:25.068262 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:41:25.068386 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:41:25.068514 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:41:25.069751 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 17 00:41:25.069946 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] May 17 00:41:25.070116 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 17 00:41:25.070263 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 00:41:25.070422 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 17 00:41:25.070439 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 17 00:41:25.070594 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 30290 usecs May 17 00:41:25.070624 kernel: PCI: CLS 0 bytes, default 64 May 17 00:41:25.070653 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:41:25.070670 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns May 17 00:41:25.070686 kernel: Initialise system trusted keyrings May 17 00:41:25.070701 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 17 00:41:25.070715 kernel: Key type asymmetric registered May 17 00:41:25.070729 kernel: Asymmetric key parser 'x509' registered May 17 00:41:25.070744 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:41:25.070758 kernel: io scheduler mq-deadline registered May 17 00:41:25.070777 kernel: io scheduler kyber registered May 17 00:41:25.070790 kernel: io scheduler bfq registered May 17 00:41:25.070804 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:41:25.070818 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 17 00:41:25.070831 kernel: ACPI: \_SB_.LNKC: Enabled 
at IRQ 11 May 17 00:41:25.070845 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 17 00:41:25.070859 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:41:25.070872 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:41:25.070885 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:41:25.070903 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:41:25.070915 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:41:25.070929 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:41:25.071134 kernel: rtc_cmos 00:03: RTC can wake from S4 May 17 00:41:25.071279 kernel: rtc_cmos 00:03: registered as rtc0 May 17 00:41:25.071399 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:41:24 UTC (1747442484) May 17 00:41:25.071532 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 17 00:41:25.071554 kernel: intel_pstate: CPU model not supported May 17 00:41:25.071566 kernel: NET: Registered PF_INET6 protocol family May 17 00:41:25.071579 kernel: Segment Routing with IPv6 May 17 00:41:25.071591 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:41:25.071603 kernel: NET: Registered PF_PACKET protocol family May 17 00:41:25.071616 kernel: Key type dns_resolver registered May 17 00:41:25.071628 kernel: IPI shorthand broadcast: enabled May 17 00:41:25.071640 kernel: sched_clock: Marking stable (648001897, 90366945)->(866307377, -127938535) May 17 00:41:25.071653 kernel: registered taskstats version 1 May 17 00:41:25.076710 kernel: Loading compiled-in X.509 certificates May 17 00:41:25.076766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 00:41:25.076780 kernel: Key type .fscrypt registered May 17 00:41:25.076793 kernel: Key type fscrypt-provisioning registered May 17 00:41:25.076806 kernel: ima: No TPM chip found, 
activating TPM-bypass! May 17 00:41:25.076819 kernel: ima: Allocated hash algorithm: sha1 May 17 00:41:25.076834 kernel: ima: No architecture policies found May 17 00:41:25.076849 kernel: clk: Disabling unused clocks May 17 00:41:25.076861 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 00:41:25.076880 kernel: Write protecting the kernel read-only data: 28672k May 17 00:41:25.076894 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 00:41:25.076910 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 00:41:25.076925 kernel: Run /init as init process May 17 00:41:25.076941 kernel: with arguments: May 17 00:41:25.076956 kernel: /init May 17 00:41:25.076997 kernel: with environment: May 17 00:41:25.077015 kernel: HOME=/ May 17 00:41:25.077027 kernel: TERM=linux May 17 00:41:25.077040 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:41:25.077067 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:41:25.077127 systemd[1]: Detected virtualization kvm. May 17 00:41:25.077142 systemd[1]: Detected architecture x86-64. May 17 00:41:25.077165 systemd[1]: Running in initrd. May 17 00:41:25.077179 systemd[1]: No hostname configured, using default hostname. May 17 00:41:25.077201 systemd[1]: Hostname set to . May 17 00:41:25.077221 systemd[1]: Initializing machine ID from VM UUID. May 17 00:41:25.077234 systemd[1]: Queued start job for default target initrd.target. May 17 00:41:25.077247 systemd[1]: Started systemd-ask-password-console.path. May 17 00:41:25.077260 systemd[1]: Reached target cryptsetup.target. May 17 00:41:25.077273 systemd[1]: Reached target paths.target. 
May 17 00:41:25.077286 systemd[1]: Reached target slices.target. May 17 00:41:25.077300 systemd[1]: Reached target swap.target. May 17 00:41:25.077313 systemd[1]: Reached target timers.target. May 17 00:41:25.077330 systemd[1]: Listening on iscsid.socket. May 17 00:41:25.077343 systemd[1]: Listening on iscsiuio.socket. May 17 00:41:25.077357 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:41:25.077370 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:41:25.077385 systemd[1]: Listening on systemd-journald.socket. May 17 00:41:25.077398 systemd[1]: Listening on systemd-networkd.socket. May 17 00:41:25.077412 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:41:25.077429 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:41:25.077442 systemd[1]: Reached target sockets.target. May 17 00:41:25.077460 systemd[1]: Starting kmod-static-nodes.service... May 17 00:41:25.077473 systemd[1]: Finished network-cleanup.service. May 17 00:41:25.077491 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:41:25.077505 systemd[1]: Starting systemd-journald.service... May 17 00:41:25.077518 systemd[1]: Starting systemd-modules-load.service... May 17 00:41:25.077536 systemd[1]: Starting systemd-resolved.service... May 17 00:41:25.077550 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:41:25.077563 systemd[1]: Finished kmod-static-nodes.service. May 17 00:41:25.077576 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:41:25.077591 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:41:25.077629 systemd-journald[183]: Journal started May 17 00:41:25.077752 systemd-journald[183]: Runtime Journal (/run/log/journal/7ecf2bd8b3dc4a5980bbbfec521380c4) is 4.9M, max 39.5M, 34.5M free. May 17 00:41:25.043140 systemd-modules-load[184]: Inserted module 'overlay' May 17 00:41:25.101318 systemd[1]: Started systemd-journald.service. 
May 17 00:41:25.101346 kernel: audit: type=1130 audit(1747442485.098:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.083580 systemd-resolved[185]: Positive Trust Anchors:
May 17 00:41:25.083591 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:41:25.083648 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:41:25.130159 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:41:25.130199 kernel: audit: type=1130 audit(1747442485.103:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.130222 kernel: audit: type=1130 audit(1747442485.103:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.130240 kernel: audit: type=1130 audit(1747442485.104:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.130259 kernel: Bridge firewalling registered
May 17 00:41:25.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.088086 systemd-resolved[185]: Defaulting to hostname 'linux'.
May 17 00:41:25.099432 systemd[1]: Started systemd-resolved.service.
May 17 00:41:25.104133 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:41:25.104754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:41:25.105260 systemd[1]: Reached target nss-lookup.target.
May 17 00:41:25.109940 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:41:25.117186 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 17 00:41:25.144696 kernel: audit: type=1130 audit(1747442485.136:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.136371 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:41:25.138108 systemd[1]: Starting dracut-cmdline.service...
May 17 00:41:25.147121 kernel: SCSI subsystem initialized
May 17 00:41:25.155103 dracut-cmdline[201]: dracut-dracut-053
May 17 00:41:25.159388 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:41:25.163693 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:41:25.163792 kernel: device-mapper: uevent: version 1.0.3
May 17 00:41:25.164831 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:41:25.170524 systemd-modules-load[184]: Inserted module 'dm_multipath'
May 17 00:41:25.171547 systemd[1]: Finished systemd-modules-load.service.
May 17 00:41:25.182686 kernel: audit: type=1130 audit(1747442485.171:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.183091 systemd[1]: Starting systemd-sysctl.service...
May 17 00:41:25.197502 systemd[1]: Finished systemd-sysctl.service.
May 17 00:41:25.201763 kernel: audit: type=1130 audit(1747442485.197:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.275686 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:41:25.301715 kernel: iscsi: registered transport (tcp)
May 17 00:41:25.334713 kernel: iscsi: registered transport (qla4xxx)
May 17 00:41:25.334868 kernel: QLogic iSCSI HBA Driver
May 17 00:41:25.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.401818 kernel: audit: type=1130 audit(1747442485.397:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.397592 systemd[1]: Finished dracut-cmdline.service.
May 17 00:41:25.399825 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:41:25.460718 kernel: raid6: avx2x4 gen() 15962 MB/s
May 17 00:41:25.477785 kernel: raid6: avx2x4 xor() 8544 MB/s
May 17 00:41:25.494704 kernel: raid6: avx2x2 gen() 15096 MB/s
May 17 00:41:25.511727 kernel: raid6: avx2x2 xor() 19798 MB/s
May 17 00:41:25.528703 kernel: raid6: avx2x1 gen() 11358 MB/s
May 17 00:41:25.545820 kernel: raid6: avx2x1 xor() 11874 MB/s
May 17 00:41:25.563075 kernel: raid6: sse2x4 gen() 9730 MB/s
May 17 00:41:25.579705 kernel: raid6: sse2x4 xor() 4385 MB/s
May 17 00:41:25.596710 kernel: raid6: sse2x2 gen() 9770 MB/s
May 17 00:41:25.613745 kernel: raid6: sse2x2 xor() 6556 MB/s
May 17 00:41:25.630723 kernel: raid6: sse2x1 gen() 8963 MB/s
May 17 00:41:25.648447 kernel: raid6: sse2x1 xor() 5202 MB/s
May 17 00:41:25.648535 kernel: raid6: using algorithm avx2x4 gen() 15962 MB/s
May 17 00:41:25.648555 kernel: raid6: .... xor() 8544 MB/s, rmw enabled
May 17 00:41:25.649245 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:41:25.667721 kernel: xor: automatically using best checksumming function avx
May 17 00:41:25.787819 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:41:25.802846 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:41:25.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.804748 systemd[1]: Starting systemd-udevd.service...
May 17 00:41:25.808331 kernel: audit: type=1130 audit(1747442485.802:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.803000 audit: BPF prog-id=7 op=LOAD
May 17 00:41:25.803000 audit: BPF prog-id=8 op=LOAD
May 17 00:41:25.826449 systemd-udevd[384]: Using default interface naming scheme 'v252'.
May 17 00:41:25.836103 systemd[1]: Started systemd-udevd.service.
May 17 00:41:25.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.839388 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:41:25.860083 dracut-pre-trigger[387]: rd.md=0: removing MD RAID activation
May 17 00:41:25.911422 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:41:25.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.912977 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:41:25.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:25.979901 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:41:26.063677 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 17 00:41:26.133587 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:41:26.133659 kernel: GPT:9289727 != 125829119
May 17 00:41:26.133681 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:41:26.133710 kernel: GPT:9289727 != 125829119
May 17 00:41:26.133731 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:41:26.133751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:26.133770 kernel: scsi host0: Virtio SCSI HBA
May 17 00:41:26.134164 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:41:26.134187 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:41:26.134206 kernel: AES CTR mode by8 optimization enabled
May 17 00:41:26.136957 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
May 17 00:41:26.195684 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (435)
May 17 00:41:26.198678 kernel: libata version 3.00 loaded.
May 17 00:41:26.202464 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:41:26.272949 kernel: ata_piix 0000:00:01.1: version 2.13
May 17 00:41:26.273288 kernel: scsi host1: ata_piix
May 17 00:41:26.273543 kernel: scsi host2: ata_piix
May 17 00:41:26.273784 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
May 17 00:41:26.273808 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
May 17 00:41:26.273828 kernel: ACPI: bus type USB registered
May 17 00:41:26.273848 kernel: usbcore: registered new interface driver usbfs
May 17 00:41:26.273866 kernel: usbcore: registered new interface driver hub
May 17 00:41:26.273902 kernel: usbcore: registered new device driver usb
May 17 00:41:26.273308 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:41:26.277920 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:41:26.285632 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:41:26.291585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:41:26.295502 systemd[1]: Starting disk-uuid.service...
May 17 00:41:26.303889 disk-uuid[503]: Primary Header is updated.
May 17 00:41:26.303889 disk-uuid[503]: Secondary Entries is updated.
May 17 00:41:26.303889 disk-uuid[503]: Secondary Header is updated.
May 17 00:41:26.315726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:26.330235 kernel: GPT:disk_guids don't match.
May 17 00:41:26.330355 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:41:26.330377 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:26.358700 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:26.400671 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
May 17 00:41:26.403696 kernel: ehci-pci: EHCI PCI platform driver
May 17 00:41:26.430684 kernel: uhci_hcd: USB Universal Host Controller Interface driver
May 17 00:41:26.514725 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 17 00:41:26.524442 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 17 00:41:26.524764 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 17 00:41:26.524952 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
May 17 00:41:26.525135 kernel: hub 1-0:1.0: USB hub found
May 17 00:41:26.525370 kernel: hub 1-0:1.0: 2 ports detected
May 17 00:41:27.335426 disk-uuid[504]: The operation has completed successfully.
May 17 00:41:27.336305 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:41:27.408623 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:41:27.409911 systemd[1]: Finished disk-uuid.service.
May 17 00:41:27.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.413102 systemd[1]: Starting verity-setup.service...
May 17 00:41:27.444717 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:41:27.508214 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:41:27.510944 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:41:27.513350 systemd[1]: Finished verity-setup.service.
May 17 00:41:27.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.613701 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:41:27.615505 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:41:27.617011 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:41:27.619684 systemd[1]: Starting ignition-setup.service...
May 17 00:41:27.623083 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:41:27.637946 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:41:27.638041 kernel: BTRFS info (device vda6): using free space tree
May 17 00:41:27.638055 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:41:27.660987 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:41:27.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.671803 systemd[1]: Finished ignition-setup.service.
May 17 00:41:27.675031 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:41:27.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.814391 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:41:27.815000 audit: BPF prog-id=9 op=LOAD
May 17 00:41:27.817128 systemd[1]: Starting systemd-networkd.service...
May 17 00:41:27.844401 ignition[605]: Ignition 2.14.0
May 17 00:41:27.844423 ignition[605]: Stage: fetch-offline
May 17 00:41:27.844554 ignition[605]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:27.844660 ignition[605]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:27.850838 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:27.850990 ignition[605]: parsed url from cmdline: ""
May 17 00:41:27.850995 ignition[605]: no config URL provided
May 17 00:41:27.851001 ignition[605]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:41:27.851011 ignition[605]: no config at "/usr/lib/ignition/user.ign"
May 17 00:41:27.852408 systemd-networkd[687]: lo: Link UP
May 17 00:41:27.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.851019 ignition[605]: failed to fetch config: resource requires networking
May 17 00:41:27.852414 systemd-networkd[687]: lo: Gained carrier
May 17 00:41:27.858513 ignition[605]: Ignition finished successfully
May 17 00:41:27.853280 systemd-networkd[687]: Enumeration completed
May 17 00:41:27.853468 systemd[1]: Started systemd-networkd.service.
May 17 00:41:27.854230 systemd-networkd[687]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:41:27.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.855181 systemd[1]: Reached target network.target.
May 17 00:41:27.855243 systemd-networkd[687]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 17 00:41:27.857827 systemd[1]: Starting iscsiuio.service...
May 17 00:41:27.863054 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:41:27.865187 systemd-networkd[687]: eth1: Link UP
May 17 00:41:27.865195 systemd-networkd[687]: eth1: Gained carrier
May 17 00:41:27.865569 systemd[1]: Starting ignition-fetch.service...
May 17 00:41:27.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.869147 systemd-networkd[687]: eth0: Link UP
May 17 00:41:27.869152 systemd-networkd[687]: eth0: Gained carrier
May 17 00:41:27.887162 systemd[1]: Started iscsiuio.service.
May 17 00:41:27.887807 systemd-networkd[687]: eth0: DHCPv4 address 64.23.137.34/20, gateway 64.23.128.1 acquired from 169.254.169.253
May 17 00:41:27.890022 systemd[1]: Starting iscsid.service...
May 17 00:41:27.894824 ignition[691]: Ignition 2.14.0
May 17 00:41:27.895873 systemd-networkd[687]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253
May 17 00:41:27.901006 iscsid[697]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:41:27.901006 iscsid[697]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 17 00:41:27.901006 iscsid[697]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:41:27.901006 iscsid[697]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:41:27.901006 iscsid[697]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:41:27.901006 iscsid[697]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:41:27.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.894837 ignition[691]: Stage: fetch
May 17 00:41:27.899348 systemd[1]: Started iscsid.service.
May 17 00:41:27.895018 ignition[691]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:27.901222 systemd[1]: Starting dracut-initqueue.service...
May 17 00:41:27.895051 ignition[691]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:27.902141 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:27.902291 ignition[691]: parsed url from cmdline: ""
May 17 00:41:27.902298 ignition[691]: no config URL provided
May 17 00:41:27.902310 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:41:27.902323 ignition[691]: no config at "/usr/lib/ignition/user.ign"
May 17 00:41:27.902380 ignition[691]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 17 00:41:27.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.923658 systemd[1]: Finished dracut-initqueue.service.
May 17 00:41:27.924410 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:41:27.924893 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:41:27.925329 systemd[1]: Reached target remote-fs.target.
May 17 00:41:27.927436 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:41:27.938804 ignition[691]: GET result: OK
May 17 00:41:27.939937 ignition[691]: parsing config with SHA512: 4eac51762feda5e60c6ef3e8cf46980a6d3240dd8c01469ebb2459d9d874269835bc20c7d717db13b07f79859fcf1554d4ff67f988277288dcb0e3cb5cc6d549
May 17 00:41:27.942030 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:41:27.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.955026 unknown[691]: fetched base config from "system"
May 17 00:41:27.955046 unknown[691]: fetched base config from "system"
May 17 00:41:27.955765 ignition[691]: fetch: fetch complete
May 17 00:41:27.955058 unknown[691]: fetched user config from "digitalocean"
May 17 00:41:27.955772 ignition[691]: fetch: fetch passed
May 17 00:41:27.955837 ignition[691]: Ignition finished successfully
May 17 00:41:27.960685 systemd[1]: Finished ignition-fetch.service.
May 17 00:41:27.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.962803 systemd[1]: Starting ignition-kargs.service...
May 17 00:41:27.979313 ignition[712]: Ignition 2.14.0
May 17 00:41:27.979329 ignition[712]: Stage: kargs
May 17 00:41:27.979546 ignition[712]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:27.979577 ignition[712]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:27.982491 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:27.985302 ignition[712]: kargs: kargs passed
May 17 00:41:27.985418 ignition[712]: Ignition finished successfully
May 17 00:41:27.986841 systemd[1]: Finished ignition-kargs.service.
May 17 00:41:27.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:27.988751 systemd[1]: Starting ignition-disks.service...
May 17 00:41:28.005063 ignition[718]: Ignition 2.14.0
May 17 00:41:28.005075 ignition[718]: Stage: disks
May 17 00:41:28.005219 ignition[718]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:28.005240 ignition[718]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:28.007666 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:28.009930 ignition[718]: disks: disks passed
May 17 00:41:28.010031 ignition[718]: Ignition finished successfully
May 17 00:41:28.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.011177 systemd[1]: Finished ignition-disks.service.
May 17 00:41:28.011886 systemd[1]: Reached target initrd-root-device.target.
May 17 00:41:28.012358 systemd[1]: Reached target local-fs-pre.target.
May 17 00:41:28.013080 systemd[1]: Reached target local-fs.target.
May 17 00:41:28.013743 systemd[1]: Reached target sysinit.target.
May 17 00:41:28.014285 systemd[1]: Reached target basic.target.
May 17 00:41:28.016318 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:41:28.036592 systemd-fsck[726]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 17 00:41:28.041293 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:41:28.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.043719 systemd[1]: Mounting sysroot.mount...
May 17 00:41:28.057702 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:41:28.057889 systemd[1]: Mounted sysroot.mount.
May 17 00:41:28.058541 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:41:28.061314 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:41:28.063774 systemd[1]: Starting flatcar-digitalocean-network.service...
May 17 00:41:28.066935 systemd[1]: Starting flatcar-metadata-hostname.service...
May 17 00:41:28.067694 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:41:28.067764 systemd[1]: Reached target ignition-diskful.target.
May 17 00:41:28.072928 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:41:28.075984 systemd[1]: Starting initrd-setup-root.service...
May 17 00:41:28.085934 initrd-setup-root[738]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:41:28.103358 initrd-setup-root[746]: cut: /sysroot/etc/group: No such file or directory
May 17 00:41:28.117627 initrd-setup-root[756]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:41:28.136265 initrd-setup-root[766]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:41:28.209674 coreos-metadata[733]: May 17 00:41:28.206 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:41:28.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.223584 systemd[1]: Finished initrd-setup-root.service.
May 17 00:41:28.225168 systemd[1]: Starting ignition-mount.service...
May 17 00:41:28.226581 systemd[1]: Starting sysroot-boot.service...
May 17 00:41:28.239408 coreos-metadata[733]: May 17 00:41:28.237 INFO Fetch successful
May 17 00:41:28.240672 coreos-metadata[733]: May 17 00:41:28.240 INFO wrote hostname ci-3510.3.7-n-b5ee3a085c to /sysroot/etc/hostname
May 17 00:41:28.242051 systemd[1]: Finished flatcar-metadata-hostname.service.
May 17 00:41:28.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.245747 coreos-metadata[732]: May 17 00:41:28.245 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:41:28.253241 bash[784]: umount: /sysroot/usr/share/oem: not mounted.
May 17 00:41:28.260268 coreos-metadata[732]: May 17 00:41:28.260 INFO Fetch successful
May 17 00:41:28.266914 ignition[785]: INFO : Ignition 2.14.0
May 17 00:41:28.266914 ignition[785]: INFO : Stage: mount
May 17 00:41:28.268325 ignition[785]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:28.268325 ignition[785]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:28.270054 ignition[785]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:28.271292 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
May 17 00:41:28.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.273398 ignition[785]: INFO : mount: mount passed
May 17 00:41:28.273398 ignition[785]: INFO : Ignition finished successfully
May 17 00:41:28.271463 systemd[1]: Finished flatcar-digitalocean-network.service.
May 17 00:41:28.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.275278 systemd[1]: Finished ignition-mount.service.
May 17 00:41:28.290778 systemd[1]: Finished sysroot-boot.service.
May 17 00:41:28.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:28.535223 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:41:28.548118 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (794)
May 17 00:41:28.550882 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:41:28.550974 kernel: BTRFS info (device vda6): using free space tree
May 17 00:41:28.550992 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:41:28.566924 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:41:28.570029 systemd[1]: Starting ignition-files.service...
May 17 00:41:28.594755 ignition[814]: INFO : Ignition 2.14.0
May 17 00:41:28.594755 ignition[814]: INFO : Stage: files
May 17 00:41:28.596352 ignition[814]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:41:28.596352 ignition[814]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
May 17 00:41:28.598186 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:41:28.601699 ignition[814]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:41:28.602875 ignition[814]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:41:28.603673 ignition[814]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:41:28.607609 ignition[814]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:41:28.608679 ignition[814]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:41:28.610712 unknown[814]: wrote ssh authorized keys file for user: core
May 17 00:41:28.611703 ignition[814]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:41:28.613338 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 17 00:41:28.614289 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
May 17 00:41:28.648192 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:41:28.921348 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
May 17 00:41:28.922745 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:41:28.922745 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 00:41:29.247967 systemd-networkd[687]: eth0: Gained IPv6LL
May 17 00:41:29.401721 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:41:29.481812 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:41:29.482862 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:41:29.482862 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:41:29.482862 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:41:29.482862 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:41:29.482862 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:41:29.482862 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:41:29.482862 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:41:29.489723 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:41:29.489723 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:41:29.489723 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:41:29.489723 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:41:29.489723 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:41:29.489723 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:41:29.489723 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
May 17 00:41:29.568066 systemd-networkd[687]: eth1: Gained IPv6LL
May 17 00:41:30.110286 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 00:41:30.502417 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
May 17 00:41:30.503504 ignition[814]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:41:30.504210 ignition[814]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:41:30.504769 ignition[814]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
May 17 00:41:30.505846 ignition[814]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:41:30.507547 ignition[814]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at
"/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:41:30.507547 ignition[814]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 17 00:41:30.510199 ignition[814]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:41:30.510199 ignition[814]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 17 00:41:30.510199 ignition[814]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 17 00:41:30.510199 ignition[814]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:41:30.515478 ignition[814]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:41:30.516228 ignition[814]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:41:30.516228 ignition[814]: INFO : files: files passed May 17 00:41:30.516228 ignition[814]: INFO : Ignition finished successfully May 17 00:41:30.519157 systemd[1]: Finished ignition-files.service. May 17 00:41:30.525195 kernel: kauditd_printk_skb: 27 callbacks suppressed May 17 00:41:30.525294 kernel: audit: type=1130 audit(1747442490.519:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.521086 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:41:30.525909 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
May 17 00:41:30.527889 systemd[1]: Starting ignition-quench.service... May 17 00:41:30.533072 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:41:30.539299 kernel: audit: type=1130 audit(1747442490.533:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.539345 kernel: audit: type=1131 audit(1747442490.533:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.533236 systemd[1]: Finished ignition-quench.service. May 17 00:41:30.542392 kernel: audit: type=1130 audit(1747442490.538:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.542499 initrd-setup-root-after-ignition[839]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:41:30.537683 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:41:30.539890 systemd[1]: Reached target ignition-complete.target. 
May 17 00:41:30.543976 systemd[1]: Starting initrd-parse-etc.service... May 17 00:41:30.567092 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:41:30.567236 systemd[1]: Finished initrd-parse-etc.service. May 17 00:41:30.568409 systemd[1]: Reached target initrd-fs.target. May 17 00:41:30.575116 kernel: audit: type=1130 audit(1747442490.567:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.575170 kernel: audit: type=1131 audit(1747442490.567:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.574431 systemd[1]: Reached target initrd.target. May 17 00:41:30.575365 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:41:30.577127 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:41:30.594974 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:41:30.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.598937 systemd[1]: Starting initrd-cleanup.service... 
May 17 00:41:30.600383 kernel: audit: type=1130 audit(1747442490.594:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.612268 systemd[1]: Stopped target nss-lookup.target. May 17 00:41:30.613483 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:41:30.614562 systemd[1]: Stopped target timers.target. May 17 00:41:30.615537 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:41:30.616365 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:41:30.624039 systemd[1]: Stopped target initrd.target. May 17 00:41:30.628059 kernel: audit: type=1131 audit(1747442490.623:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.627724 systemd[1]: Stopped target basic.target. May 17 00:41:30.628452 systemd[1]: Stopped target ignition-complete.target. May 17 00:41:30.629401 systemd[1]: Stopped target ignition-diskful.target. May 17 00:41:30.630317 systemd[1]: Stopped target initrd-root-device.target. May 17 00:41:30.631196 systemd[1]: Stopped target remote-fs.target. May 17 00:41:30.632096 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:41:30.632867 systemd[1]: Stopped target sysinit.target. May 17 00:41:30.633453 systemd[1]: Stopped target local-fs.target. May 17 00:41:30.634262 systemd[1]: Stopped target local-fs-pre.target. May 17 00:41:30.634999 systemd[1]: Stopped target swap.target. May 17 00:41:30.635842 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
May 17 00:41:30.639421 kernel: audit: type=1131 audit(1747442490.635:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.636033 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:41:30.636627 systemd[1]: Stopped target cryptsetup.target. May 17 00:41:30.643154 kernel: audit: type=1131 audit(1747442490.639:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.639757 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:41:30.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.639918 systemd[1]: Stopped dracut-initqueue.service. May 17 00:41:30.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.640740 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:41:30.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:30.640878 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:41:30.643706 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:41:30.643838 systemd[1]: Stopped ignition-files.service. May 17 00:41:30.647690 iscsid[697]: iscsid shutting down. May 17 00:41:30.644394 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:41:30.644531 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:41:30.646337 systemd[1]: Stopping ignition-mount.service... May 17 00:41:30.651671 systemd[1]: Stopping iscsid.service... May 17 00:41:30.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.653600 systemd[1]: Stopping sysroot-boot.service... May 17 00:41:30.654134 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:41:30.654347 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:41:30.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:30.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.673446 ignition[852]: INFO : Ignition 2.14.0 May 17 00:41:30.673446 ignition[852]: INFO : Stage: umount May 17 00:41:30.673446 ignition[852]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:41:30.673446 ignition[852]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c May 17 00:41:30.654968 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:41:30.655109 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:41:30.676916 ignition[852]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 17 00:41:30.657992 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:41:30.679433 ignition[852]: INFO : umount: umount passed May 17 00:41:30.679433 ignition[852]: INFO : Ignition finished successfully May 17 00:41:30.658201 systemd[1]: Stopped iscsid.service. May 17 00:41:30.663161 systemd[1]: Stopping iscsiuio.service... May 17 00:41:30.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.671136 systemd[1]: iscsiuio.service: Deactivated successfully. 
May 17 00:41:30.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.671266 systemd[1]: Stopped iscsiuio.service. May 17 00:41:30.672028 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:41:30.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.672138 systemd[1]: Finished initrd-cleanup.service. May 17 00:41:30.681950 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:41:30.682114 systemd[1]: Stopped ignition-mount.service. May 17 00:41:30.682881 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:41:30.682967 systemd[1]: Stopped ignition-disks.service. May 17 00:41:30.683693 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:41:30.683776 systemd[1]: Stopped ignition-kargs.service. May 17 00:41:30.684317 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:41:30.684391 systemd[1]: Stopped ignition-fetch.service. May 17 00:41:30.685018 systemd[1]: Stopped target network.target. May 17 00:41:30.686242 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:41:30.686383 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:41:30.687514 systemd[1]: Stopped target paths.target. May 17 00:41:30.688121 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:41:30.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.695755 systemd[1]: Stopped systemd-ask-password-console.path. 
May 17 00:41:30.696340 systemd[1]: Stopped target slices.target. May 17 00:41:30.696751 systemd[1]: Stopped target sockets.target. May 17 00:41:30.697145 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:41:30.697219 systemd[1]: Closed iscsid.socket. May 17 00:41:30.697609 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:41:30.697671 systemd[1]: Closed iscsiuio.socket. May 17 00:41:30.698018 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:41:30.698075 systemd[1]: Stopped ignition-setup.service. May 17 00:41:30.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.698660 systemd[1]: Stopping systemd-networkd.service... May 17 00:41:30.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.699307 systemd[1]: Stopping systemd-resolved.service... May 17 00:41:30.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.712000 audit: BPF prog-id=6 op=UNLOAD May 17 00:41:30.701228 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:41:30.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.703739 systemd-networkd[687]: eth0: DHCPv6 lease lost May 17 00:41:30.706827 systemd-networkd[687]: eth1: DHCPv6 lease lost May 17 00:41:30.715000 audit: BPF prog-id=9 op=UNLOAD May 17 00:41:30.708706 systemd[1]: systemd-resolved.service: Deactivated successfully. 
May 17 00:41:30.708869 systemd[1]: Stopped systemd-resolved.service. May 17 00:41:30.710322 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:41:30.710458 systemd[1]: Stopped systemd-networkd.service. May 17 00:41:30.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.711527 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:41:30.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.711885 systemd[1]: Stopped sysroot-boot.service. May 17 00:41:30.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.713366 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:41:30.713410 systemd[1]: Closed systemd-networkd.socket. May 17 00:41:30.713960 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:41:30.714018 systemd[1]: Stopped initrd-setup-root.service. May 17 00:41:30.715611 systemd[1]: Stopping network-cleanup.service... May 17 00:41:30.716361 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:41:30.716440 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:41:30.720209 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:41:30.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.720327 systemd[1]: Stopped systemd-sysctl.service. 
May 17 00:41:30.721262 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:41:30.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.721340 systemd[1]: Stopped systemd-modules-load.service. May 17 00:41:30.726077 systemd[1]: Stopping systemd-udevd.service... May 17 00:41:30.730312 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:41:30.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.733628 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:41:30.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.733882 systemd[1]: Stopped systemd-udevd.service. May 17 00:41:30.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.739002 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:41:30.739160 systemd[1]: Stopped network-cleanup.service. May 17 00:41:30.740312 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:41:30.740365 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:41:30.740876 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:41:30.740939 systemd[1]: Closed systemd-udevd-kernel.socket. 
May 17 00:41:30.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.741413 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:41:30.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.741467 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:41:30.742165 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:41:30.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:30.742210 systemd[1]: Stopped dracut-cmdline.service. May 17 00:41:30.742868 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:41:30.742912 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:41:30.744650 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:41:30.745147 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:41:30.745234 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:41:30.754171 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:41:30.754243 systemd[1]: Stopped kmod-static-nodes.service. 
May 17 00:41:30.754896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:41:30.754958 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:41:30.757436 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:41:30.758242 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:41:30.758360 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:41:30.759145 systemd[1]: Reached target initrd-switch-root.target. May 17 00:41:30.760827 systemd[1]: Starting initrd-switch-root.service... May 17 00:41:30.777143 systemd[1]: Switching root. May 17 00:41:30.800295 systemd-journald[183]: Journal stopped May 17 00:41:34.629533 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). May 17 00:41:34.632788 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:41:34.632843 kernel: SELinux: Class anon_inode not defined in policy. May 17 00:41:34.632868 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:41:34.632887 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:41:34.632900 kernel: SELinux: policy capability open_perms=1 May 17 00:41:34.632920 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:41:34.632937 kernel: SELinux: policy capability always_check_network=0 May 17 00:41:34.632950 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:41:34.632962 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:41:34.632985 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:41:34.633000 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:41:34.633015 systemd[1]: Successfully loaded SELinux policy in 52.033ms. May 17 00:41:34.633046 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.984ms. 
May 17 00:41:34.633063 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:41:34.633076 systemd[1]: Detected virtualization kvm. May 17 00:41:34.633088 systemd[1]: Detected architecture x86-64. May 17 00:41:34.633101 systemd[1]: Detected first boot. May 17 00:41:34.633119 systemd[1]: Hostname set to . May 17 00:41:34.633135 systemd[1]: Initializing machine ID from VM UUID. May 17 00:41:34.633148 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:41:34.633161 systemd[1]: Populated /etc with preset unit settings. May 17 00:41:34.633175 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:41:34.633190 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:41:34.633206 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:41:34.633221 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:41:34.633260 systemd[1]: Stopped initrd-switch-root.service. May 17 00:41:34.633279 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:41:34.633292 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:41:34.633305 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:41:34.633320 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. 
May 17 00:41:34.633336 systemd[1]: Created slice system-getty.slice. May 17 00:41:34.633349 systemd[1]: Created slice system-modprobe.slice. May 17 00:41:34.633362 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:41:34.633375 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:41:34.633390 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:41:34.633403 systemd[1]: Created slice user.slice. May 17 00:41:34.633416 systemd[1]: Started systemd-ask-password-console.path. May 17 00:41:34.633428 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:41:34.633441 systemd[1]: Set up automount boot.automount. May 17 00:41:34.633453 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:41:34.633471 systemd[1]: Stopped target initrd-switch-root.target. May 17 00:41:34.633504 systemd[1]: Stopped target initrd-fs.target. May 17 00:41:34.633525 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:41:34.633544 systemd[1]: Reached target integritysetup.target. May 17 00:41:34.633566 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:41:34.633593 systemd[1]: Reached target remote-fs.target. May 17 00:41:34.633606 systemd[1]: Reached target slices.target. May 17 00:41:34.633619 systemd[1]: Reached target swap.target. May 17 00:41:34.633632 systemd[1]: Reached target torcx.target. May 17 00:41:34.641079 systemd[1]: Reached target veritysetup.target. May 17 00:41:34.641114 systemd[1]: Listening on systemd-coredump.socket. May 17 00:41:34.641128 systemd[1]: Listening on systemd-initctl.socket. May 17 00:41:34.641142 systemd[1]: Listening on systemd-networkd.socket. May 17 00:41:34.641155 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:41:34.641169 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:41:34.641183 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:41:34.641202 systemd[1]: Mounting dev-hugepages.mount... 
May 17 00:41:34.641254 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:41:34.641291 systemd[1]: Mounting media.mount...
May 17 00:41:34.641336 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:34.641365 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:41:34.641392 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:41:34.641419 systemd[1]: Mounting tmp.mount...
May 17 00:41:34.641445 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:41:34.641472 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:41:34.641507 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:41:34.641525 systemd[1]: Starting modprobe@configfs.service...
May 17 00:41:34.641543 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:41:34.641578 systemd[1]: Starting modprobe@drm.service...
May 17 00:41:34.641603 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:41:34.641633 systemd[1]: Starting modprobe@fuse.service...
May 17 00:41:34.648777 systemd[1]: Starting modprobe@loop.service...
May 17 00:41:34.648806 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:41:34.648828 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:41:34.648844 systemd[1]: Stopped systemd-fsck-root.service.
May 17 00:41:34.648857 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:41:34.648870 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:41:34.648896 systemd[1]: Stopped systemd-journald.service.
May 17 00:41:34.648910 systemd[1]: Starting systemd-journald.service...
May 17 00:41:34.648922 systemd[1]: Starting systemd-modules-load.service...
May 17 00:41:34.648934 systemd[1]: Starting systemd-network-generator.service...
May 17 00:41:34.648946 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:41:34.648960 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:41:34.648982 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:41:34.649001 systemd[1]: Stopped verity-setup.service.
May 17 00:41:34.649023 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:34.649051 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:41:34.649065 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:41:34.649078 systemd[1]: Mounted media.mount.
May 17 00:41:34.649090 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:41:34.649103 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:41:34.649117 systemd[1]: Mounted tmp.mount.
May 17 00:41:34.649137 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:41:34.649158 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:41:34.649178 systemd[1]: Finished modprobe@configfs.service.
May 17 00:41:34.649199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:41:34.649220 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:41:34.649233 kernel: fuse: init (API version 7.34)
May 17 00:41:34.649248 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:41:34.649265 systemd[1]: Finished modprobe@drm.service.
May 17 00:41:34.649287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:41:34.649306 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:41:34.649326 systemd[1]: Finished systemd-modules-load.service.
May 17 00:41:34.649341 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:41:34.649354 systemd[1]: Finished modprobe@fuse.service.
May 17 00:41:34.649368 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 17 00:41:34.649382 kernel: loop: module loaded
May 17 00:41:34.649395 systemd[1]: Mounting sys-kernel-config.mount...
May 17 00:41:34.649419 systemd-journald[954]: Journal started
May 17 00:41:34.649542 systemd-journald[954]: Runtime Journal (/run/log/journal/7ecf2bd8b3dc4a5980bbbfec521380c4) is 4.9M, max 39.5M, 34.5M free.
May 17 00:41:30.956000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:41:31.018000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:41:31.018000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:41:31.018000 audit: BPF prog-id=10 op=LOAD
May 17 00:41:31.018000 audit: BPF prog-id=10 op=UNLOAD
May 17 00:41:31.018000 audit: BPF prog-id=11 op=LOAD
May 17 00:41:31.018000 audit: BPF prog-id=11 op=UNLOAD
May 17 00:41:31.147000 audit[885]: AVC avc: denied { associate } for pid=885 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 17 00:41:31.147000 audit[885]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178c2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=867 pid=885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:31.147000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:41:31.149000 audit[885]: AVC avc: denied { associate } for pid=885 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 17 00:41:31.149000 audit[885]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000117999 a2=1ed a3=0 items=2 ppid=867 pid=885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:31.149000 audit: CWD cwd="/"
May 17 00:41:31.149000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:31.149000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:31.149000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:41:34.398000 audit: BPF prog-id=12 op=LOAD
May 17 00:41:34.398000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:41:34.399000 audit: BPF prog-id=13 op=LOAD
May 17 00:41:34.399000 audit: BPF prog-id=14 op=LOAD
May 17 00:41:34.399000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:41:34.399000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:41:34.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.408000 audit: BPF prog-id=12 op=UNLOAD
May 17 00:41:34.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.554000 audit: BPF prog-id=15 op=LOAD
May 17 00:41:34.554000 audit: BPF prog-id=16 op=LOAD
May 17 00:41:34.554000 audit: BPF prog-id=17 op=LOAD
May 17 00:41:34.554000 audit: BPF prog-id=13 op=UNLOAD
May 17 00:41:34.554000 audit: BPF prog-id=14 op=UNLOAD
May 17 00:41:34.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.624000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:41:34.624000 audit[954]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffec6f135f0 a2=4000 a3=7ffec6f1368c items=0 ppid=1 pid=954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:34.624000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:41:34.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.395157 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:41:31.143873 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:41:34.395184 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 17 00:41:31.144421 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:41:34.401536 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:41:31.144443 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:41:31.144502 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 17 00:41:34.663047 systemd[1]: Starting systemd-sysctl.service...
May 17 00:41:34.663137 systemd[1]: Started systemd-journald.service.
May 17 00:41:34.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:31.144518 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 17 00:41:34.661553 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:41:31.144575 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 17 00:41:34.661913 systemd[1]: Finished modprobe@loop.service.
May 17 00:41:31.144591 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 17 00:41:34.664753 systemd[1]: Finished systemd-network-generator.service.
May 17 00:41:31.144894 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 17 00:41:34.665516 systemd[1]: Finished systemd-remount-fs.service.
May 17 00:41:31.144940 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:41:34.666348 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 17 00:41:31.144954 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:41:34.666898 systemd[1]: Mounted sys-kernel-config.mount.
May 17 00:41:31.147473 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 17 00:41:34.667536 systemd[1]: Reached target network-pre.target.
May 17 00:41:31.147561 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 17 00:41:34.668059 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:41:34.693866 systemd-journald[954]: Time spent on flushing to /var/log/journal/7ecf2bd8b3dc4a5980bbbfec521380c4 is 54.808ms for 1133 entries.
May 17 00:41:34.693866 systemd-journald[954]: System Journal (/var/log/journal/7ecf2bd8b3dc4a5980bbbfec521380c4) is 8.0M, max 195.6M, 187.6M free.
May 17 00:41:34.763992 systemd-journald[954]: Received client request to flush runtime journal.
May 17 00:41:34.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:31.147603 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 17 00:41:34.670984 systemd[1]: Starting systemd-hwdb-update.service...
May 17 00:41:31.147627 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 17 00:41:34.675106 systemd[1]: Starting systemd-journal-flush.service...
May 17 00:41:31.147684 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 17 00:41:34.675792 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:41:31.147700 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 17 00:41:34.678503 systemd[1]: Starting systemd-random-seed.service...
May 17 00:41:33.827476 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:33Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:41:34.679480 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:41:33.828036 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:33Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:41:34.702053 systemd[1]: Finished systemd-random-seed.service.
May 17 00:41:33.828293 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:33Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:41:34.702725 systemd[1]: Reached target first-boot-complete.target.
May 17 00:41:33.828685 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:33Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:41:34.724454 systemd[1]: Finished systemd-sysctl.service.
May 17 00:41:34.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:33.828797 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:33Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 17 00:41:34.757961 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:41:33.828918 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2025-05-17T00:41:33Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 17 00:41:34.761651 systemd[1]: Starting systemd-sysusers.service...
May 17 00:41:34.768371 systemd[1]: Finished systemd-journal-flush.service.
May 17 00:41:34.805810 systemd[1]: Finished systemd-sysusers.service.
May 17 00:41:34.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.808628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:41:34.828879 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:41:34.831588 systemd[1]: Starting systemd-udev-settle.service...
May 17 00:41:34.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:34.848683 udevadm[996]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:41:34.855342 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:41:34.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.512154 systemd[1]: Finished systemd-hwdb-update.service.
May 17 00:41:35.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.513000 audit: BPF prog-id=18 op=LOAD
May 17 00:41:35.513000 audit: BPF prog-id=19 op=LOAD
May 17 00:41:35.513000 audit: BPF prog-id=7 op=UNLOAD
May 17 00:41:35.513000 audit: BPF prog-id=8 op=UNLOAD
May 17 00:41:35.515715 systemd[1]: Starting systemd-udevd.service...
May 17 00:41:35.538201 systemd-udevd[997]: Using default interface naming scheme 'v252'.
May 17 00:41:35.571619 systemd[1]: Started systemd-udevd.service.
May 17 00:41:35.576954 kernel: kauditd_printk_skb: 102 callbacks suppressed
May 17 00:41:35.577120 kernel: audit: type=1130 audit(1747442495.571:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.583359 kernel: audit: type=1334 audit(1747442495.579:142): prog-id=20 op=LOAD
May 17 00:41:35.579000 audit: BPF prog-id=20 op=LOAD
May 17 00:41:35.582202 systemd[1]: Starting systemd-networkd.service...
May 17 00:41:35.598376 kernel: audit: type=1334 audit(1747442495.592:143): prog-id=21 op=LOAD
May 17 00:41:35.598519 kernel: audit: type=1334 audit(1747442495.592:144): prog-id=22 op=LOAD
May 17 00:41:35.598544 kernel: audit: type=1334 audit(1747442495.592:145): prog-id=23 op=LOAD
May 17 00:41:35.592000 audit: BPF prog-id=21 op=LOAD
May 17 00:41:35.592000 audit: BPF prog-id=22 op=LOAD
May 17 00:41:35.592000 audit: BPF prog-id=23 op=LOAD
May 17 00:41:35.597279 systemd[1]: Starting systemd-userdbd.service...
May 17 00:41:35.669861 kernel: audit: type=1130 audit(1747442495.664:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.664375 systemd[1]: Started systemd-userdbd.service.
May 17 00:41:35.674431 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:35.674819 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:41:35.677788 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:41:35.682669 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:41:35.686351 systemd[1]: Starting modprobe@loop.service...
May 17 00:41:35.703051 kernel: audit: type=1130 audit(1747442495.690:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.703197 kernel: audit: type=1131 audit(1747442495.694:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.703232 kernel: audit: type=1130 audit(1747442495.698:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.709571 kernel: audit: type=1131 audit(1747442495.698:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.689253 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:41:35.689368 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:41:35.689618 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:35.690569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:41:35.690928 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:41:35.698982 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:41:35.699187 systemd[1]: Finished modprobe@loop.service.
May 17 00:41:35.702968 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:41:35.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.720472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:41:35.720685 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:41:35.722473 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:41:35.732785 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
May 17 00:41:35.800704 systemd-networkd[1010]: lo: Link UP
May 17 00:41:35.802367 systemd-networkd[1010]: lo: Gained carrier
May 17 00:41:35.803728 systemd-networkd[1010]: Enumeration completed
May 17 00:41:35.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:35.804077 systemd[1]: Started systemd-networkd.service.
May 17 00:41:35.805122 systemd-networkd[1010]: eth1: Configuring with /run/systemd/network/10-aa:8c:e5:31:5f:b4.network.
May 17 00:41:35.807773 systemd-networkd[1010]: eth0: Configuring with /run/systemd/network/10-ba:a0:93:ae:7d:ac.network.
May 17 00:41:35.808894 systemd-networkd[1010]: eth1: Link UP
May 17 00:41:35.809039 systemd-networkd[1010]: eth1: Gained carrier
May 17 00:41:35.813126 systemd-networkd[1010]: eth0: Link UP
May 17 00:41:35.813137 systemd-networkd[1010]: eth0: Gained carrier
May 17 00:41:35.818778 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 00:41:35.825669 kernel: ACPI: button: Power Button [PWRF]
May 17 00:41:35.859766 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:41:35.879000 audit[1009]: AVC avc: denied { confidentiality } for pid=1009 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:41:35.879000 audit[1009]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5648dc6e10b0 a1=338ac a2=7f1ae3c9ebc5 a3=5 items=110 ppid=997 pid=1009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:41:35.879000 audit: CWD cwd="/"
May 17 00:41:35.879000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=1 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=2 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=3 name=(null) inode=14492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=4 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=5 name=(null) inode=14493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=6 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=7 name=(null) inode=14494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=8 name=(null) inode=14494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=9 name=(null) inode=14495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=10 name=(null) inode=14494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=11 name=(null) inode=14496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=12 name=(null) inode=14494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=13 name=(null) inode=14497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=14 name=(null) inode=14494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=15 name=(null) inode=14498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=16 name=(null) inode=14494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=17 name=(null) inode=14499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=18 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=19 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=20 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=21 name=(null) inode=14501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=22 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=23 name=(null) inode=14502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=24 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=25 name=(null) inode=14503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=26 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=27 name=(null) inode=14504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=28 name=(null) inode=14500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=29 name=(null) inode=14505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=30 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=31 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=32 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=33 name=(null) inode=14507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=34 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=35 name=(null) inode=14508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=36 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=37 name=(null) inode=14509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=38 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=39 name=(null) inode=14510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=40 name=(null) inode=14506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=41 name=(null) inode=14511 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=42 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=43 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=44 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=45 name=(null) inode=14513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=46 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=47 name=(null) inode=14514 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=48 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=49 name=(null) inode=14515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=50 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=51 name=(null) inode=14516 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=52 name=(null) inode=14512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=53 name=(null) inode=14517 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=55 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=56 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=57 name=(null) inode=14519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=58 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=59 name=(null) inode=14520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=60 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=61 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=62 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=63 name=(null) inode=14522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=64 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=65 name=(null) inode=14523 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=66 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=67 name=(null) inode=14524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=68 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=69 name=(null) inode=14525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=70 name=(null) inode=14521 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=71 name=(null) inode=14526 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=72 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=73 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=74 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=75 name=(null) inode=14528 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=76 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=77 name=(null) inode=14529 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=78 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=79 name=(null) inode=14530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=80 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=81 name=(null) inode=14531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=82 name=(null) inode=14527 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=83 name=(null) inode=14532 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=84 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=85 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=86 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=87 name=(null) inode=14534 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=88 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=89 name=(null) inode=14535 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=90 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=91 name=(null) inode=14536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=92 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=93 name=(null) inode=14537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=94 name=(null) inode=14533 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=95 name=(null) inode=14538 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=96 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=97 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=98 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=99 name=(null) inode=14540 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=100 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=101 name=(null) inode=14541 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=102 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=103 name=(null) inode=14542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=104 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=105 name=(null) inode=14543 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=106 name=(null) inode=14539 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=107 name=(null) inode=14544 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PATH item=109 name=(null) inode=14545 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:41:35.879000 audit: PROCTITLE proctitle="(udev-worker)"
May 17 00:41:35.914763 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 17 00:41:35.930701 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 17 00:41:35.943679 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:41:36.054669 kernel: EDAC MC: Ver: 3.0.0
May 17 00:41:36.078560 systemd[1]: Finished systemd-udev-settle.service.
May 17 00:41:36.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.081433 systemd[1]: Starting lvm2-activation-early.service...
May 17 00:41:36.100704 lvm[1035]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:41:36.128322 systemd[1]: Finished lvm2-activation-early.service.
May 17 00:41:36.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.129123 systemd[1]: Reached target cryptsetup.target.
May 17 00:41:36.131774 systemd[1]: Starting lvm2-activation.service...
May 17 00:41:36.138594 lvm[1036]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:41:36.171678 systemd[1]: Finished lvm2-activation.service.
May 17 00:41:36.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.180627 systemd[1]: Reached target local-fs-pre.target.
May 17 00:41:36.195933 systemd[1]: Mounting media-configdrive.mount...
May 17 00:41:36.196525 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:41:36.196621 systemd[1]: Reached target machines.target.
May 17 00:41:36.198564 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 17 00:41:36.213654 kernel: ISO 9660 Extensions: RRIP_1991A
May 17 00:41:36.214826 systemd[1]: Mounted media-configdrive.mount.
May 17 00:41:36.215337 systemd[1]: Reached target local-fs.target.
May 17 00:41:36.217184 systemd[1]: Starting ldconfig.service...
May 17 00:41:36.218353 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:41:36.218448 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:41:36.220027 systemd[1]: Starting systemd-boot-update.service...
May 17 00:41:36.222492 systemd[1]: Starting systemd-machine-id-commit.service...
May 17 00:41:36.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.225746 systemd[1]: Starting systemd-sysext.service...
May 17 00:41:36.226905 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 17 00:41:36.239700 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1042 (bootctl)
May 17 00:41:36.241887 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 17 00:41:36.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.273632 systemd[1]: Unmounting usr-share-oem.mount...
May 17 00:41:36.276715 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:41:36.278126 systemd[1]: Finished systemd-machine-id-commit.service.
May 17 00:41:36.288519 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 17 00:41:36.288831 systemd[1]: Unmounted usr-share-oem.mount.
May 17 00:41:36.306678 kernel: loop0: detected capacity change from 0 to 229808
May 17 00:41:36.337673 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:41:36.361842 kernel: loop1: detected capacity change from 0 to 229808
May 17 00:41:36.380039 (sd-sysext)[1052]: Using extensions 'kubernetes'.
May 17 00:41:36.380750 (sd-sysext)[1052]: Merged extensions into '/usr'.
May 17 00:41:36.388012 systemd-fsck[1048]: fsck.fat 4.2 (2021-01-31)
May 17 00:41:36.388012 systemd-fsck[1048]: /dev/vda1: 790 files, 120726/258078 clusters
May 17 00:41:36.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.393205 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 17 00:41:36.396832 systemd[1]: Mounting boot.mount...
May 17 00:41:36.424348 systemd[1]: Mounted boot.mount.
May 17 00:41:36.431064 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:36.433146 systemd[1]: Mounting usr-share-oem.mount...
May 17 00:41:36.433955 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:41:36.438292 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:41:36.441763 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:41:36.444249 systemd[1]: Starting modprobe@loop.service...
May 17 00:41:36.445984 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:41:36.446254 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:41:36.447136 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:41:36.450188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:41:36.450464 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:41:36.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.456671 systemd[1]: Mounted usr-share-oem.mount.
May 17 00:41:36.459486 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:41:36.460775 systemd[1]: Finished systemd-sysext.service.
May 17 00:41:36.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.461493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:41:36.461730 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:41:36.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.462661 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:41:36.462832 systemd[1]: Finished modprobe@loop.service.
May 17 00:41:36.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.467698 systemd[1]: Starting ensure-sysext.service...
May 17 00:41:36.468344 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:41:36.470083 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 17 00:41:36.476719 systemd[1]: Finished systemd-boot-update.service.
May 17 00:41:36.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:41:36.481750 systemd[1]: Reloading.
May 17 00:41:36.502859 systemd-tmpfiles[1061]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 17 00:41:36.505412 systemd-tmpfiles[1061]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:41:36.510808 systemd-tmpfiles[1061]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:41:36.624342 ldconfig[1041]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:41:36.698525 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-05-17T00:41:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:41:36.698570 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-05-17T00:41:36Z" level=info msg="torcx already run" May 17 00:41:36.830080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:41:36.830114 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:41:36.859982 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:41:36.864065 systemd-networkd[1010]: eth1: Gained IPv6LL May 17 00:41:36.931000 audit: BPF prog-id=24 op=LOAD May 17 00:41:36.932000 audit: BPF prog-id=25 op=LOAD May 17 00:41:36.932000 audit: BPF prog-id=18 op=UNLOAD May 17 00:41:36.932000 audit: BPF prog-id=19 op=UNLOAD May 17 00:41:36.933000 audit: BPF prog-id=26 op=LOAD May 17 00:41:36.933000 audit: BPF prog-id=15 op=UNLOAD May 17 00:41:36.934000 audit: BPF prog-id=27 op=LOAD May 17 00:41:36.934000 audit: BPF prog-id=28 op=LOAD May 17 00:41:36.934000 audit: BPF prog-id=16 op=UNLOAD May 17 00:41:36.934000 audit: BPF prog-id=17 op=UNLOAD May 17 00:41:36.935000 audit: BPF prog-id=29 op=LOAD May 17 00:41:36.935000 audit: BPF prog-id=20 op=UNLOAD May 17 00:41:36.936000 audit: BPF prog-id=30 op=LOAD May 17 00:41:36.936000 audit: BPF prog-id=21 op=UNLOAD May 17 00:41:36.937000 audit: BPF prog-id=31 op=LOAD May 17 00:41:36.937000 audit: BPF prog-id=32 op=LOAD May 17 00:41:36.937000 audit: BPF prog-id=22 op=UNLOAD May 17 00:41:36.937000 audit: BPF prog-id=23 op=UNLOAD May 17 00:41:36.943183 systemd[1]: Finished ldconfig.service. May 17 00:41:36.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:36.945763 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:41:36.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:36.951326 systemd[1]: Starting audit-rules.service... May 17 00:41:36.953924 systemd[1]: Starting clean-ca-certificates.service... May 17 00:41:36.958108 systemd[1]: Starting systemd-journal-catalog-update.service... 
May 17 00:41:36.963000 audit: BPF prog-id=33 op=LOAD May 17 00:41:36.966000 audit: BPF prog-id=34 op=LOAD May 17 00:41:36.965768 systemd[1]: Starting systemd-resolved.service... May 17 00:41:36.968709 systemd[1]: Starting systemd-timesyncd.service... May 17 00:41:36.972905 systemd[1]: Starting systemd-update-utmp.service... May 17 00:41:36.980206 systemd[1]: Finished clean-ca-certificates.service. May 17 00:41:36.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:36.987573 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:36.998000 audit[1133]: SYSTEM_BOOT pid=1133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:41:37.008295 systemd[1]: Finished systemd-update-utmp.service. May 17 00:41:37.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.016614 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:37.019135 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:37.023089 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:37.025682 systemd[1]: Starting modprobe@loop.service... May 17 00:41:37.026831 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:41:37.027031 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:37.027192 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:37.028343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:37.029217 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:37.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.032423 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:37.035160 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:41:37.035932 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:41:37.036112 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:37.036244 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:37.039079 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:37.039289 systemd[1]: Finished modprobe@efi_pstore.service. 
May 17 00:41:37.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.040305 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:37.040461 systemd[1]: Finished modprobe@loop.service. May 17 00:41:37.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.044026 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:41:37.048506 systemd[1]: Starting modprobe@drm.service... May 17 00:41:37.052339 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:41:37.056204 systemd[1]: Starting modprobe@loop.service... May 17 00:41:37.057837 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:41:37.058068 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:37.061006 systemd[1]: Starting systemd-networkd-wait-online.service... 
May 17 00:41:37.061681 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:41:37.067114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:41:37.067330 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:41:37.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.068410 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:41:37.068564 systemd[1]: Finished modprobe@drm.service. May 17 00:41:37.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.071514 systemd[1]: Finished ensure-sysext.service. May 17 00:41:37.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.081267 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:41:37.081470 systemd[1]: Finished modprobe@loop.service. 
May 17 00:41:37.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.082037 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:41:37.091469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:41:37.091709 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:41:37.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.092349 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:41:37.093115 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:41:37.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:41:37.096248 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:41:37.098647 systemd[1]: Starting systemd-update-done.service... May 17 00:41:37.120096 systemd[1]: Finished systemd-update-done.service. May 17 00:41:37.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:41:37.138000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:41:37.138000 audit[1157]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcffed3b30 a2=420 a3=0 items=0 ppid=1128 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:41:37.138000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:41:37.139262 augenrules[1157]: No rules May 17 00:41:37.140510 systemd[1]: Finished audit-rules.service. May 17 00:41:37.145440 systemd[1]: Started systemd-timesyncd.service. May 17 00:41:37.146077 systemd[1]: Reached target time-set.target. May 17 00:41:37.179952 systemd-resolved[1131]: Positive Trust Anchors: May 17 00:41:37.180515 systemd-resolved[1131]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:41:37.180693 systemd-resolved[1131]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:41:37.189414 systemd-resolved[1131]: Using system hostname 'ci-3510.3.7-n-b5ee3a085c'. May 17 00:41:37.192800 systemd[1]: Started systemd-resolved.service. May 17 00:41:37.193301 systemd[1]: Reached target network.target. May 17 00:41:37.193676 systemd[1]: Reached target network-online.target. May 17 00:41:37.193994 systemd[1]: Reached target nss-lookup.target. May 17 00:41:37.194301 systemd[1]: Reached target sysinit.target. May 17 00:41:37.194759 systemd[1]: Started motdgen.path. May 17 00:41:37.195186 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:41:37.195983 systemd[1]: Started logrotate.timer. May 17 00:41:37.196471 systemd[1]: Started mdadm.timer. May 17 00:41:37.196800 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:41:37.197235 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:41:37.197271 systemd[1]: Reached target paths.target. May 17 00:41:37.197570 systemd[1]: Reached target timers.target. May 17 00:41:37.198433 systemd[1]: Listening on dbus.socket. May 17 00:41:37.200321 systemd[1]: Starting docker.socket... May 17 00:41:37.204319 systemd[1]: Listening on sshd.socket. 
May 17 00:41:37.205034 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:37.205886 systemd[1]: Listening on docker.socket. May 17 00:41:37.206363 systemd[1]: Reached target sockets.target. May 17 00:41:37.206790 systemd[1]: Reached target basic.target. May 17 00:41:37.207266 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:41:37.207392 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:41:37.208880 systemd[1]: Starting containerd.service... May 17 00:41:37.211011 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 17 00:41:37.213688 systemd[1]: Starting dbus.service... May 17 00:41:37.215770 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:41:37.218267 systemd[1]: Starting extend-filesystems.service... May 17 00:41:37.218785 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:41:37.224474 jq[1169]: false May 17 00:41:37.225878 systemd[1]: Starting kubelet.service... May 17 00:41:37.228480 systemd[1]: Starting motdgen.service... May 17 00:41:37.231253 systemd[1]: Starting prepare-helm.service... May 17 00:41:37.234321 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:41:37.237328 systemd[1]: Starting sshd-keygen.service... May 17 00:41:37.244891 systemd[1]: Starting systemd-logind.service... May 17 00:41:37.245406 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:41:37.245710 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 17 00:41:37.248005 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:41:37.249158 systemd[1]: Starting update-engine.service... May 17 00:41:37.251679 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:41:37.256629 jq[1181]: true May 17 00:41:37.260823 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:41:37.261081 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:41:37.267562 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:37.267617 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:41:37.281586 tar[1183]: linux-amd64/LICENSE May 17 00:41:37.283094 tar[1183]: linux-amd64/helm May 17 00:41:37.299079 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:41:37.299287 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:41:37.306818 jq[1184]: true May 17 00:41:37.310978 dbus-daemon[1167]: [system] SELinux support is enabled May 17 00:41:37.314119 systemd[1]: Started dbus.service. May 17 00:41:37.317138 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:41:37.317184 systemd[1]: Reached target system-config.target. May 17 00:41:37.317660 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:41:37.317689 systemd[1]: Reached target user-config.target. 
May 17 00:41:37.322215 extend-filesystems[1171]: Found loop1 May 17 00:41:37.323853 extend-filesystems[1171]: Found vda May 17 00:41:37.324831 extend-filesystems[1171]: Found vda1 May 17 00:41:37.336832 extend-filesystems[1171]: Found vda2 May 17 00:41:37.338846 extend-filesystems[1171]: Found vda3 May 17 00:41:37.339581 extend-filesystems[1171]: Found usr May 17 00:41:37.343262 extend-filesystems[1171]: Found vda4 May 17 00:41:37.345257 extend-filesystems[1171]: Found vda6 May 17 00:41:37.346877 extend-filesystems[1171]: Found vda7 May 17 00:41:37.349286 extend-filesystems[1171]: Found vda9 May 17 00:41:37.349286 extend-filesystems[1171]: Checking size of /dev/vda9 May 17 00:41:37.889381 systemd-timesyncd[1132]: Contacted time server 5.161.111.190:123 (0.flatcar.pool.ntp.org). May 17 00:41:37.889484 systemd-timesyncd[1132]: Initial clock synchronization to Sat 2025-05-17 00:41:37.889089 UTC. May 17 00:41:37.889591 systemd-resolved[1131]: Clock change detected. Flushing caches. May 17 00:41:37.925168 extend-filesystems[1171]: Resized partition /dev/vda9 May 17 00:41:37.941979 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:41:37.942204 systemd[1]: Finished motdgen.service. May 17 00:41:37.948115 extend-filesystems[1215]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:41:37.962551 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 17 00:41:37.981994 update_engine[1180]: I0517 00:41:37.981487 1180 main.cc:92] Flatcar Update Engine starting May 17 00:41:37.986697 systemd[1]: Started update-engine.service. May 17 00:41:37.989654 systemd[1]: Started locksmithd.service. May 17 00:41:37.991303 update_engine[1180]: I0517 00:41:37.991254 1180 update_check_scheduler.cc:74] Next update check in 10m13s May 17 00:41:38.037057 bash[1223]: Updated "/home/core/.ssh/authorized_keys" May 17 00:41:38.038377 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
May 17 00:41:38.066915 env[1190]: time="2025-05-17T00:41:38.066840388Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:41:38.116564 systemd-logind[1179]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:41:38.118668 systemd-logind[1179]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:41:38.120275 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 17 00:41:38.121100 systemd-logind[1179]: New seat seat0. May 17 00:41:38.124219 systemd[1]: Started systemd-logind.service. May 17 00:41:38.141855 coreos-metadata[1166]: May 17 00:41:38.141 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 17 00:41:38.143383 extend-filesystems[1215]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:41:38.143383 extend-filesystems[1215]: old_desc_blocks = 1, new_desc_blocks = 8 May 17 00:41:38.143383 extend-filesystems[1215]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 17 00:41:38.145745 extend-filesystems[1171]: Resized filesystem in /dev/vda9 May 17 00:41:38.145745 extend-filesystems[1171]: Found vdb May 17 00:41:38.145934 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:41:38.146168 systemd[1]: Finished extend-filesystems.service. May 17 00:41:38.162667 coreos-metadata[1166]: May 17 00:41:38.162 INFO Fetch successful May 17 00:41:38.174668 unknown[1166]: wrote ssh authorized keys file for user: core May 17 00:41:38.181752 env[1190]: time="2025-05-17T00:41:38.181698939Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:41:38.185209 env[1190]: time="2025-05-17T00:41:38.185157999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:41:38.188081 update-ssh-keys[1229]: Updated "/home/core/.ssh/authorized_keys" May 17 00:41:38.188707 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 00:41:38.189193 env[1190]: time="2025-05-17T00:41:38.189149245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:41:38.192113 env[1190]: time="2025-05-17T00:41:38.192053427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:38.192670 env[1190]: time="2025-05-17T00:41:38.192641780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:41:38.192785 env[1190]: time="2025-05-17T00:41:38.192768296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:41:38.192898 env[1190]: time="2025-05-17T00:41:38.192879847Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:41:38.192989 env[1190]: time="2025-05-17T00:41:38.192975884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:41:38.193162 env[1190]: time="2025-05-17T00:41:38.193146667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:41:38.193668 env[1190]: time="2025-05-17T00:41:38.193648019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:41:38.193973 env[1190]: time="2025-05-17T00:41:38.193940721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:41:38.194062 env[1190]: time="2025-05-17T00:41:38.194047682Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:41:38.194194 env[1190]: time="2025-05-17T00:41:38.194172427Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:41:38.194286 env[1190]: time="2025-05-17T00:41:38.194271329Z" level=info msg="metadata content store policy set" policy=shared May 17 00:41:38.197876 env[1190]: time="2025-05-17T00:41:38.197822347Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:41:38.198080 env[1190]: time="2025-05-17T00:41:38.198059489Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:41:38.198169 env[1190]: time="2025-05-17T00:41:38.198154588Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:41:38.198299 env[1190]: time="2025-05-17T00:41:38.198284604Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:41:38.198424 env[1190]: time="2025-05-17T00:41:38.198408430Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:41:38.198514 env[1190]: time="2025-05-17T00:41:38.198489773Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1
May 17 00:41:38.198603 env[1190]: time="2025-05-17T00:41:38.198589400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:41:38.198702 env[1190]: time="2025-05-17T00:41:38.198686604Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:41:38.198810 env[1190]: time="2025-05-17T00:41:38.198793726Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 17 00:41:38.198960 env[1190]: time="2025-05-17T00:41:38.198945310Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:41:38.199042 env[1190]: time="2025-05-17T00:41:38.199028729Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:41:38.199119 env[1190]: time="2025-05-17T00:41:38.199106729Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:41:38.199360 env[1190]: time="2025-05-17T00:41:38.199339496Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:41:38.199576 env[1190]: time="2025-05-17T00:41:38.199559228Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:41:38.199999 env[1190]: time="2025-05-17T00:41:38.199967183Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:41:38.200114 env[1190]: time="2025-05-17T00:41:38.200098459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:41:38.200202 env[1190]: time="2025-05-17T00:41:38.200184858Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:41:38.200320 env[1190]: time="2025-05-17T00:41:38.200305834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:41:38.200445 env[1190]: time="2025-05-17T00:41:38.200431004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:41:38.200527 env[1190]: time="2025-05-17T00:41:38.200498199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:41:38.200591 env[1190]: time="2025-05-17T00:41:38.200578030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:41:38.200652 env[1190]: time="2025-05-17T00:41:38.200639198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:41:38.200725 env[1190]: time="2025-05-17T00:41:38.200711288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:41:38.200841 env[1190]: time="2025-05-17T00:41:38.200815153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:41:38.200926 env[1190]: time="2025-05-17T00:41:38.200911668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:41:38.201005 env[1190]: time="2025-05-17T00:41:38.200991238Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:41:38.201216 env[1190]: time="2025-05-17T00:41:38.201197511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:41:38.201303 env[1190]: time="2025-05-17T00:41:38.201288423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:41:38.201371 env[1190]: time="2025-05-17T00:41:38.201357076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:41:38.201432 env[1190]: time="2025-05-17T00:41:38.201419312Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:41:38.201499 env[1190]: time="2025-05-17T00:41:38.201483183Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 17 00:41:38.201577 env[1190]: time="2025-05-17T00:41:38.201563658Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:41:38.201657 env[1190]: time="2025-05-17T00:41:38.201642159Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 17 00:41:38.201748 env[1190]: time="2025-05-17T00:41:38.201733094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:41:38.202031 env[1190]: time="2025-05-17T00:41:38.201977990Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:41:38.204045 env[1190]: time="2025-05-17T00:41:38.202206104Z" level=info msg="Connect containerd service"
May 17 00:41:38.204045 env[1190]: time="2025-05-17T00:41:38.202249498Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:41:38.204045 env[1190]: time="2025-05-17T00:41:38.202926073Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:41:38.204911 env[1190]: time="2025-05-17T00:41:38.204869701Z" level=info msg="Start subscribing containerd event"
May 17 00:41:38.205021 env[1190]: time="2025-05-17T00:41:38.205005326Z" level=info msg="Start recovering state"
May 17 00:41:38.205144 env[1190]: time="2025-05-17T00:41:38.205130981Z" level=info msg="Start event monitor"
May 17 00:41:38.205230 env[1190]: time="2025-05-17T00:41:38.205215469Z" level=info msg="Start snapshots syncer"
May 17 00:41:38.205291 env[1190]: time="2025-05-17T00:41:38.205279006Z" level=info msg="Start cni network conf syncer for default"
May 17 00:41:38.205361 env[1190]: time="2025-05-17T00:41:38.205347274Z" level=info msg="Start streaming server"
May 17 00:41:38.205642 env[1190]: time="2025-05-17T00:41:38.203212039Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:41:38.205801 env[1190]: time="2025-05-17T00:41:38.205785128Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:41:38.206628 env[1190]: time="2025-05-17T00:41:38.206605839Z" level=info msg="containerd successfully booted in 0.140620s"
May 17 00:41:38.206778 systemd[1]: Started containerd.service.
May 17 00:41:38.231181 systemd-networkd[1010]: eth0: Gained IPv6LL
May 17 00:41:38.725222 systemd[1]: Created slice system-sshd.slice.
May 17 00:41:38.800105 tar[1183]: linux-amd64/README.md
May 17 00:41:38.808400 systemd[1]: Finished prepare-helm.service.
May 17 00:41:39.019407 locksmithd[1224]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:41:39.498881 systemd[1]: Started kubelet.service.
May 17 00:41:40.226708 kubelet[1241]: E0517 00:41:40.226636 1241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:41:40.229310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:41:40.229526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:41:40.229853 systemd[1]: kubelet.service: Consumed 1.422s CPU time.
May 17 00:41:40.274384 sshd_keygen[1196]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:41:40.304569 systemd[1]: Finished sshd-keygen.service.
May 17 00:41:40.307468 systemd[1]: Starting issuegen.service...
May 17 00:41:40.309797 systemd[1]: Started sshd@0-64.23.137.34:22-147.75.109.163:47580.service.
May 17 00:41:40.325305 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:41:40.325574 systemd[1]: Finished issuegen.service.
May 17 00:41:40.331610 systemd[1]: Starting systemd-user-sessions.service...
May 17 00:41:40.344284 systemd[1]: Finished systemd-user-sessions.service.
May 17 00:41:40.347821 systemd[1]: Started getty@tty1.service.
May 17 00:41:40.351470 systemd[1]: Started serial-getty@ttyS0.service.
May 17 00:41:40.352706 systemd[1]: Reached target getty.target.
May 17 00:41:40.353331 systemd[1]: Reached target multi-user.target.
May 17 00:41:40.356776 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 17 00:41:40.371785 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 17 00:41:40.372029 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 17 00:41:40.389671 systemd[1]: Startup finished in 931ms (kernel) + 6.103s (initrd) + 8.961s (userspace) = 15.997s.
May 17 00:41:40.431012 sshd[1254]: Accepted publickey for core from 147.75.109.163 port 47580 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:41:40.434756 sshd[1254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:40.449989 systemd[1]: Created slice user-500.slice.
May 17 00:41:40.452271 systemd[1]: Starting user-runtime-dir@500.service...
May 17 00:41:40.461727 systemd-logind[1179]: New session 1 of user core.
May 17 00:41:40.469363 systemd[1]: Finished user-runtime-dir@500.service.
May 17 00:41:40.473241 systemd[1]: Starting user@500.service...
May 17 00:41:40.478760 (systemd)[1264]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:40.576136 systemd[1264]: Queued start job for default target default.target.
May 17 00:41:40.576831 systemd[1264]: Reached target paths.target.
May 17 00:41:40.576855 systemd[1264]: Reached target sockets.target.
May 17 00:41:40.576869 systemd[1264]: Reached target timers.target.
May 17 00:41:40.576881 systemd[1264]: Reached target basic.target.
May 17 00:41:40.576938 systemd[1264]: Reached target default.target.
May 17 00:41:40.576974 systemd[1264]: Startup finished in 88ms.
May 17 00:41:40.577924 systemd[1]: Started user@500.service.
May 17 00:41:40.581068 systemd[1]: Started session-1.scope.
May 17 00:41:40.657699 systemd[1]: Started sshd@1-64.23.137.34:22-147.75.109.163:47588.service.
May 17 00:41:40.697629 sshd[1273]: Accepted publickey for core from 147.75.109.163 port 47588 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:41:40.701112 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:40.707933 systemd[1]: Started session-2.scope.
May 17 00:41:40.708606 systemd-logind[1179]: New session 2 of user core.
May 17 00:41:40.776285 sshd[1273]: pam_unix(sshd:session): session closed for user core
May 17 00:41:40.782339 systemd[1]: sshd@1-64.23.137.34:22-147.75.109.163:47588.service: Deactivated successfully.
May 17 00:41:40.783682 systemd[1]: session-2.scope: Deactivated successfully.
May 17 00:41:40.785216 systemd-logind[1179]: Session 2 logged out. Waiting for processes to exit.
May 17 00:41:40.788412 systemd[1]: Started sshd@2-64.23.137.34:22-147.75.109.163:47604.service.
May 17 00:41:40.790550 systemd-logind[1179]: Removed session 2.
May 17 00:41:40.833422 sshd[1279]: Accepted publickey for core from 147.75.109.163 port 47604 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:41:40.836452 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:40.843451 systemd[1]: Started session-3.scope.
May 17 00:41:40.844637 systemd-logind[1179]: New session 3 of user core.
May 17 00:41:40.906681 sshd[1279]: pam_unix(sshd:session): session closed for user core
May 17 00:41:40.914961 systemd[1]: sshd@2-64.23.137.34:22-147.75.109.163:47604.service: Deactivated successfully.
May 17 00:41:40.915999 systemd[1]: session-3.scope: Deactivated successfully.
May 17 00:41:40.916903 systemd-logind[1179]: Session 3 logged out. Waiting for processes to exit.
May 17 00:41:40.920029 systemd[1]: Started sshd@3-64.23.137.34:22-147.75.109.163:47612.service.
May 17 00:41:40.921909 systemd-logind[1179]: Removed session 3.
May 17 00:41:40.973315 sshd[1285]: Accepted publickey for core from 147.75.109.163 port 47612 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:41:40.975694 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:40.983636 systemd-logind[1179]: New session 4 of user core.
May 17 00:41:40.983653 systemd[1]: Started session-4.scope.
May 17 00:41:41.056177 sshd[1285]: pam_unix(sshd:session): session closed for user core
May 17 00:41:41.061606 systemd[1]: sshd@3-64.23.137.34:22-147.75.109.163:47612.service: Deactivated successfully.
May 17 00:41:41.062548 systemd[1]: session-4.scope: Deactivated successfully.
May 17 00:41:41.063372 systemd-logind[1179]: Session 4 logged out. Waiting for processes to exit.
May 17 00:41:41.065639 systemd[1]: Started sshd@4-64.23.137.34:22-147.75.109.163:47618.service.
May 17 00:41:41.067583 systemd-logind[1179]: Removed session 4.
May 17 00:41:41.114188 sshd[1291]: Accepted publickey for core from 147.75.109.163 port 47618 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:41:41.116756 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:41:41.124653 systemd[1]: Started session-5.scope.
May 17 00:41:41.125708 systemd-logind[1179]: New session 5 of user core.
May 17 00:41:41.206709 sudo[1295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:41:41.207784 sudo[1295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:41:41.261136 systemd[1]: Starting docker.service...
May 17 00:41:41.342782 env[1305]: time="2025-05-17T00:41:41.341612791Z" level=info msg="Starting up"
May 17 00:41:41.344868 env[1305]: time="2025-05-17T00:41:41.344811929Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:41:41.345091 env[1305]: time="2025-05-17T00:41:41.345065844Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:41:41.345256 env[1305]: time="2025-05-17T00:41:41.345226231Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:41:41.345349 env[1305]: time="2025-05-17T00:41:41.345331723Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:41:41.348107 env[1305]: time="2025-05-17T00:41:41.348057503Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:41:41.348330 env[1305]: time="2025-05-17T00:41:41.348306622Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:41:41.348428 env[1305]: time="2025-05-17T00:41:41.348408584Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:41:41.349684 env[1305]: time="2025-05-17T00:41:41.349654780Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:41:41.409783 env[1305]: time="2025-05-17T00:41:41.409728337Z" level=info msg="Loading containers: start."
May 17 00:41:41.590571 kernel: Initializing XFRM netlink socket
May 17 00:41:41.639717 env[1305]: time="2025-05-17T00:41:41.638605180Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 17 00:41:41.760490 systemd-networkd[1010]: docker0: Link UP
May 17 00:41:41.780485 env[1305]: time="2025-05-17T00:41:41.780403398Z" level=info msg="Loading containers: done."
May 17 00:41:41.797803 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1454885546-merged.mount: Deactivated successfully.
May 17 00:41:41.803379 env[1305]: time="2025-05-17T00:41:41.803314647Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:41:41.804000 env[1305]: time="2025-05-17T00:41:41.803967260Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 17 00:41:41.804340 env[1305]: time="2025-05-17T00:41:41.804316464Z" level=info msg="Daemon has completed initialization"
May 17 00:41:41.826865 systemd[1]: Started docker.service.
May 17 00:41:41.840427 env[1305]: time="2025-05-17T00:41:41.840020723Z" level=info msg="API listen on /run/docker.sock"
May 17 00:41:41.871752 systemd[1]: Starting coreos-metadata.service...
May 17 00:41:41.930059 coreos-metadata[1422]: May 17 00:41:41.929 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:41:41.942600 coreos-metadata[1422]: May 17 00:41:41.942 INFO Fetch successful
May 17 00:41:41.960328 systemd[1]: Finished coreos-metadata.service.
May 17 00:41:42.822494 env[1190]: time="2025-05-17T00:41:42.822427505Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 17 00:41:43.373220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608252026.mount: Deactivated successfully.
May 17 00:41:45.368678 env[1190]: time="2025-05-17T00:41:45.368567976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:45.370833 env[1190]: time="2025-05-17T00:41:45.370766760Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:45.374087 env[1190]: time="2025-05-17T00:41:45.373998607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:45.376653 env[1190]: time="2025-05-17T00:41:45.376589143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:45.378270 env[1190]: time="2025-05-17T00:41:45.378191401Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\""
May 17 00:41:45.379571 env[1190]: time="2025-05-17T00:41:45.379486763Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 17 00:41:47.301949 env[1190]: time="2025-05-17T00:41:47.301856317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:47.306017 env[1190]: time="2025-05-17T00:41:47.305915081Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:47.308216 env[1190]: time="2025-05-17T00:41:47.308146489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:47.310763 env[1190]: time="2025-05-17T00:41:47.310710292Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:47.311911 env[1190]: time="2025-05-17T00:41:47.311850708Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\""
May 17 00:41:47.312823 env[1190]: time="2025-05-17T00:41:47.312755013Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 17 00:41:48.961496 env[1190]: time="2025-05-17T00:41:48.961399285Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:48.963490 env[1190]: time="2025-05-17T00:41:48.963418547Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:48.965885 env[1190]: time="2025-05-17T00:41:48.965829995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:48.967958 env[1190]: time="2025-05-17T00:41:48.967900776Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:48.969211 env[1190]: time="2025-05-17T00:41:48.969148680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\""
May 17 00:41:48.969946 env[1190]: time="2025-05-17T00:41:48.969910864Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 17 00:41:50.238302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659661174.mount: Deactivated successfully.
May 17 00:41:50.240012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:41:50.240234 systemd[1]: Stopped kubelet.service.
May 17 00:41:50.240310 systemd[1]: kubelet.service: Consumed 1.422s CPU time.
May 17 00:41:50.244562 systemd[1]: Starting kubelet.service...
May 17 00:41:50.399186 systemd[1]: Started kubelet.service.
May 17 00:41:50.473991 kubelet[1444]: E0517 00:41:50.473915 1444 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:41:50.477731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:41:50.477914 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:41:51.228190 env[1190]: time="2025-05-17T00:41:51.228092299Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:51.229810 env[1190]: time="2025-05-17T00:41:51.229745316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:51.231437 env[1190]: time="2025-05-17T00:41:51.231386055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:51.233162 env[1190]: time="2025-05-17T00:41:51.233100789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:51.234020 env[1190]: time="2025-05-17T00:41:51.233972777Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\""
May 17 00:41:51.234934 env[1190]: time="2025-05-17T00:41:51.234893411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 17 00:41:51.786890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463744176.mount: Deactivated successfully.
May 17 00:41:52.954138 env[1190]: time="2025-05-17T00:41:52.954037650Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:52.957466 env[1190]: time="2025-05-17T00:41:52.957382710Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:52.960598 env[1190]: time="2025-05-17T00:41:52.960501030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:52.963142 env[1190]: time="2025-05-17T00:41:52.963059320Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:52.964272 env[1190]: time="2025-05-17T00:41:52.964204355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
May 17 00:41:52.965327 env[1190]: time="2025-05-17T00:41:52.965281985Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:41:53.452747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331848557.mount: Deactivated successfully.
May 17 00:41:53.457428 env[1190]: time="2025-05-17T00:41:53.457329453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:53.458917 env[1190]: time="2025-05-17T00:41:53.458858914Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:53.460765 env[1190]: time="2025-05-17T00:41:53.460720573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:53.462707 env[1190]: time="2025-05-17T00:41:53.462658193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:53.463576 env[1190]: time="2025-05-17T00:41:53.463497365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 00:41:53.464278 env[1190]: time="2025-05-17T00:41:53.464238305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 17 00:41:56.430500 env[1190]: time="2025-05-17T00:41:56.430418600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:56.432614 env[1190]: time="2025-05-17T00:41:56.432552469Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:56.440704 env[1190]: time="2025-05-17T00:41:56.440647129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:56.442840 env[1190]: time="2025-05-17T00:41:56.442769221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:56.444557 env[1190]: time="2025-05-17T00:41:56.444431351Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
May 17 00:42:00.137557 systemd[1]: Stopped kubelet.service.
May 17 00:42:00.141082 systemd[1]: Starting kubelet.service...
May 17 00:42:00.197971 systemd[1]: Reloading.
May 17 00:42:00.383725 /usr/lib/systemd/system-generators/torcx-generator[1495]: time="2025-05-17T00:42:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:42:00.383799 /usr/lib/systemd/system-generators/torcx-generator[1495]: time="2025-05-17T00:42:00Z" level=info msg="torcx already run"
May 17 00:42:00.547266 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:42:00.547306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:42:00.580245 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:42:00.755188 systemd[1]: Stopping kubelet.service...
May 17 00:42:00.756793 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:42:00.757392 systemd[1]: Stopped kubelet.service.
May 17 00:42:00.762367 systemd[1]: Starting kubelet.service...
May 17 00:42:00.978251 systemd[1]: Started kubelet.service.
May 17 00:42:01.068492 kubelet[1549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:42:01.069558 kubelet[1549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 17 00:42:01.069747 kubelet[1549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:42:01.070031 kubelet[1549]: I0517 00:42:01.069971 1549 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:42:01.424282 kubelet[1549]: I0517 00:42:01.423396 1549 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 17 00:42:01.424558 kubelet[1549]: I0517 00:42:01.424524 1549 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:42:01.425216 kubelet[1549]: I0517 00:42:01.425177 1549 server.go:956] "Client rotation is on, will bootstrap in background"
May 17 00:42:01.512332 kubelet[1549]: I0517 00:42:01.512258 1549 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:42:01.512834 kubelet[1549]: E0517 00:42:01.512474 1549 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.137.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 17 00:42:01.529421 kubelet[1549]: E0517 00:42:01.529342 1549 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:42:01.529421 kubelet[1549]: I0517 00:42:01.529402 1549 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:42:01.536595 kubelet[1549]: I0517 00:42:01.536445 1549 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:42:01.538054 kubelet[1549]: I0517 00:42:01.537987 1549 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:42:01.538673 kubelet[1549]: I0517 00:42:01.538333 1549 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-b5ee3a085c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 00:42:01.539133 kubelet[1549]: I0517 00:42:01.539101 1549 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:42:01.539337 kubelet[1549]: I0517 00:42:01.539317 1549 container_manager_linux.go:303] "Creating device plugin manager"
May 17 00:42:01.539751 kubelet[1549]: I0517 00:42:01.539727 1549 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:42:01.547407 kubelet[1549]: I0517 00:42:01.547329 1549 kubelet.go:480] "Attempting to sync node with API server"
May 17 00:42:01.547807 kubelet[1549]: I0517 00:42:01.547770 1549 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:42:01.548017 kubelet[1549]: I0517 00:42:01.547995 1549 kubelet.go:386] "Adding apiserver pod source"
May 17 00:42:01.548164 kubelet[1549]: I0517 00:42:01.548146 1549 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:42:01.578825 kubelet[1549]: E0517 00:42:01.578729 1549 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.137.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 17 00:42:01.579254 kubelet[1549]: I0517 00:42:01.578953 1549 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 17 00:42:01.579869 kubelet[1549]: I0517 00:42:01.579788 1549 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 17 00:42:01.581172 kubelet[1549]: W0517 00:42:01.581126 1549 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 17 00:42:01.582218 kubelet[1549]: E0517 00:42:01.582159 1549 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.137.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b5ee3a085c&limit=500&resourceVersion=0\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 17 00:42:01.588269 kubelet[1549]: I0517 00:42:01.588181 1549 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 00:42:01.588513 kubelet[1549]: I0517 00:42:01.588311 1549 server.go:1289] "Started kubelet"
May 17 00:42:01.588772 kubelet[1549]: I0517 00:42:01.588709 1549 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:42:01.590668 kubelet[1549]: I0517 00:42:01.590625 1549 server.go:317] "Adding debug handlers to kubelet server"
May 17 00:42:01.593978 kubelet[1549]: I0517 00:42:01.593860 1549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:42:01.594702 kubelet[1549]: I0517 00:42:01.594659 1549 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:42:01.597364 kubelet[1549]: E0517 00:42:01.594911 1549 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.137.34:6443/api/v1/namespaces/default/events\": dial tcp 64.23.137.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-b5ee3a085c.184029b70cadc114 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-b5ee3a085c,UID:ci-3510.3.7-n-b5ee3a085c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-b5ee3a085c,},FirstTimestamp:2025-05-17 00:42:01.58822018 +0000 UTC m=+0.598324325,LastTimestamp:2025-05-17 00:42:01.58822018 +0000 UTC m=+0.598324325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-b5ee3a085c,}"
May 17 00:42:01.602116 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 17 00:42:01.602926 kubelet[1549]: I0517 00:42:01.602416 1549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:42:01.604176 kubelet[1549]: E0517 00:42:01.604142 1549 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:42:01.604754 kubelet[1549]: I0517 00:42:01.604721 1549 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:42:01.611221 kubelet[1549]: E0517 00:42:01.611160 1549 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b5ee3a085c\" not found"
May 17 00:42:01.611577 kubelet[1549]: I0517 00:42:01.611553 1549 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 00:42:01.612087 kubelet[1549]: I0517 00:42:01.612051 1549 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 00:42:01.612313 kubelet[1549]: I0517 00:42:01.612297 1549 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:42:01.613492 kubelet[1549]: E0517 00:42:01.613431 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.137.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b5ee3a085c?timeout=10s\": dial tcp 64.23.137.34:6443: connect: connection refused" interval="200ms"
May 17 00:42:01.613738 kubelet[1549]: E0517 00:42:01.613700 1549 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.137.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 17 00:42:01.614146 kubelet[1549]: I0517 00:42:01.614106 1549 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:42:01.615900 kubelet[1549]: I0517 00:42:01.615860 1549 factory.go:223] Registration of the containerd container factory successfully
May 17 00:42:01.615900 kubelet[1549]: I0517 00:42:01.615887 1549 factory.go:223] Registration of the systemd container factory successfully
May 17 00:42:01.620696 kubelet[1549]: I0517 00:42:01.620630 1549 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 17 00:42:01.662934 kubelet[1549]: I0517 00:42:01.662894 1549 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:42:01.663242 kubelet[1549]: I0517 00:42:01.663213 1549 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:42:01.663381 kubelet[1549]: I0517 00:42:01.663367 1549 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:42:01.666228 kubelet[1549]: I0517 00:42:01.666187 1549 policy_none.go:49] "None policy: Start"
May 17 00:42:01.666465 kubelet[1549]: I0517 00:42:01.666442 1549 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:42:01.666641 kubelet[1549]: I0517 00:42:01.666620 1549 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:42:01.668197 kubelet[1549]: I0517 00:42:01.668144 1549 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 17 00:42:01.668197 kubelet[1549]: I0517 00:42:01.668188 1549 status_manager.go:230] "Starting to sync pod status with apiserver"
May 17 00:42:01.668400 kubelet[1549]: I0517 00:42:01.668229 1549 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:42:01.668400 kubelet[1549]: I0517 00:42:01.668241 1549 kubelet.go:2436] "Starting kubelet main sync loop"
May 17 00:42:01.668400 kubelet[1549]: E0517 00:42:01.668327 1549 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:42:01.678779 kubelet[1549]: E0517 00:42:01.675151 1549 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.137.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 17 00:42:01.675579 systemd[1]: Created slice kubepods.slice.
May 17 00:42:01.690456 systemd[1]: Created slice kubepods-burstable.slice.
May 17 00:42:01.698880 systemd[1]: Created slice kubepods-besteffort.slice.
May 17 00:42:01.711425 kubelet[1549]: E0517 00:42:01.711346 1549 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 17 00:42:01.711854 kubelet[1549]: I0517 00:42:01.711812 1549 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:42:01.711947 kubelet[1549]: I0517 00:42:01.711848 1549 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:42:01.715402 kubelet[1549]: I0517 00:42:01.715217 1549 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:42:01.716072 kubelet[1549]: E0517 00:42:01.716032 1549 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:42:01.716218 kubelet[1549]: E0517 00:42:01.716097 1549 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-b5ee3a085c\" not found"
May 17 00:42:01.793362 systemd[1]: Created slice kubepods-burstable-podd9fd166df5c160069c02ad747e719d04.slice.
May 17 00:42:01.815345 kubelet[1549]: E0517 00:42:01.815279 1549 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-b5ee3a085c\" not found" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.816243 kubelet[1549]: I0517 00:42:01.815299 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.817699 kubelet[1549]: I0517 00:42:01.817056 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.817699 kubelet[1549]: E0517 00:42:01.817608 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.137.34:6443/api/v1/nodes\": dial tcp 64.23.137.34:6443: connect: connection refused" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.817699 kubelet[1549]: E0517 00:42:01.815569 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.137.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b5ee3a085c?timeout=10s\": dial tcp 64.23.137.34:6443: connect: connection refused" interval="400ms"
May 17 00:42:01.818871 kubelet[1549]: I0517 00:42:01.818832 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.819296 kubelet[1549]: I0517 00:42:01.819261 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9fd166df5c160069c02ad747e719d04-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b5ee3a085c\" (UID: \"d9fd166df5c160069c02ad747e719d04\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.819636 kubelet[1549]: I0517 00:42:01.819604 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9fd166df5c160069c02ad747e719d04-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b5ee3a085c\" (UID: \"d9fd166df5c160069c02ad747e719d04\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.819800 kubelet[1549]: I0517 00:42:01.819773 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.819989 kubelet[1549]: I0517 00:42:01.819966 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.820145 kubelet[1549]: I0517 00:42:01.820124 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.820281 kubelet[1549]: I0517 00:42:01.820260 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/051766354b015ea3c297fadc8887812a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-b5ee3a085c\" (UID: \"051766354b015ea3c297fadc8887812a\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.820498 kubelet[1549]: I0517 00:42:01.820409 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9fd166df5c160069c02ad747e719d04-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-b5ee3a085c\" (UID: \"d9fd166df5c160069c02ad747e719d04\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.820699 systemd[1]: Created slice kubepods-burstable-pod051766354b015ea3c297fadc8887812a.slice.
May 17 00:42:01.824324 kubelet[1549]: E0517 00:42:01.824268 1549 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-b5ee3a085c\" not found" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:01.827789 systemd[1]: Created slice kubepods-burstable-podcde46b721823566f5cdbb90f5f694160.slice.
May 17 00:42:01.830485 kubelet[1549]: E0517 00:42:01.830402 1549 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-b5ee3a085c\" not found" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:02.044622 kubelet[1549]: I0517 00:42:02.043765 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:02.044622 kubelet[1549]: E0517 00:42:02.044263 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.137.34:6443/api/v1/nodes\": dial tcp 64.23.137.34:6443: connect: connection refused" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:02.123460 kubelet[1549]: E0517 00:42:02.122822 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:02.124288 env[1190]: time="2025-05-17T00:42:02.124205754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-b5ee3a085c,Uid:d9fd166df5c160069c02ad747e719d04,Namespace:kube-system,Attempt:0,}"
May 17 00:42:02.126168 kubelet[1549]: E0517 00:42:02.126121 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:02.128832 env[1190]: time="2025-05-17T00:42:02.127273535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-b5ee3a085c,Uid:051766354b015ea3c297fadc8887812a,Namespace:kube-system,Attempt:0,}"
May 17 00:42:02.132015 kubelet[1549]: E0517 00:42:02.131864 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:02.133036 env[1190]: time="2025-05-17T00:42:02.132887545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-b5ee3a085c,Uid:cde46b721823566f5cdbb90f5f694160,Namespace:kube-system,Attempt:0,}"
May 17 00:42:02.219851 kubelet[1549]: E0517 00:42:02.219765 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.137.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b5ee3a085c?timeout=10s\": dial tcp 64.23.137.34:6443: connect: connection refused" interval="800ms"
May 17 00:42:02.446572 kubelet[1549]: I0517 00:42:02.446469 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:02.447238 kubelet[1549]: E0517 00:42:02.447167 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.137.34:6443/api/v1/nodes\": dial tcp 64.23.137.34:6443: connect: connection refused" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:02.562121 kubelet[1549]: E0517 00:42:02.562027 1549 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.137.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-b5ee3a085c&limit=500&resourceVersion=0\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 17 00:42:02.661904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount507799346.mount: Deactivated successfully.
May 17 00:42:02.668471 env[1190]: time="2025-05-17T00:42:02.668400365Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.669918 env[1190]: time="2025-05-17T00:42:02.669855748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.671759 env[1190]: time="2025-05-17T00:42:02.671711062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.674065 env[1190]: time="2025-05-17T00:42:02.673992532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.676579 env[1190]: time="2025-05-17T00:42:02.675934610Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.680284 env[1190]: time="2025-05-17T00:42:02.680206228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.685081 env[1190]: time="2025-05-17T00:42:02.684955671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.687579 env[1190]: time="2025-05-17T00:42:02.686932040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.687823 kubelet[1549]: E0517 00:42:02.687479 1549 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.137.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 17 00:42:02.688981 env[1190]: time="2025-05-17T00:42:02.688931134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.690692 env[1190]: time="2025-05-17T00:42:02.690577180Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.692502 env[1190]: time="2025-05-17T00:42:02.692424625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.693935 env[1190]: time="2025-05-17T00:42:02.693882634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:02.745297 env[1190]: time="2025-05-17T00:42:02.743681295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:02.745680 env[1190]: time="2025-05-17T00:42:02.743842188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:02.745680 env[1190]: time="2025-05-17T00:42:02.743864232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:02.745680 env[1190]: time="2025-05-17T00:42:02.744346040Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61d606bc3d1c7a83415b43460f045a7ff15d643593ff5f81297faff94393bbc4 pid=1601 runtime=io.containerd.runc.v2
May 17 00:42:02.748305 env[1190]: time="2025-05-17T00:42:02.743795265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:02.748800 env[1190]: time="2025-05-17T00:42:02.748718164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:02.749094 env[1190]: time="2025-05-17T00:42:02.749020410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:02.749683 env[1190]: time="2025-05-17T00:42:02.749619663Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5c053f09599151df57b433cd67cc4b56f8d8381aeaea3a951cebc25af3666c5 pid=1602 runtime=io.containerd.runc.v2
May 17 00:42:02.769975 env[1190]: time="2025-05-17T00:42:02.769821067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:02.769975 env[1190]: time="2025-05-17T00:42:02.769889448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:02.769975 env[1190]: time="2025-05-17T00:42:02.769907326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:02.771886 env[1190]: time="2025-05-17T00:42:02.771758923Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/acf1c343b1df795b3fae878297a441e991cc76edf7d9c531e3e5f9a6519a1723 pid=1628 runtime=io.containerd.runc.v2
May 17 00:42:02.785615 systemd[1]: Started cri-containerd-61d606bc3d1c7a83415b43460f045a7ff15d643593ff5f81297faff94393bbc4.scope.
May 17 00:42:02.815170 systemd[1]: Started cri-containerd-f5c053f09599151df57b433cd67cc4b56f8d8381aeaea3a951cebc25af3666c5.scope.
May 17 00:42:02.846718 systemd[1]: Started cri-containerd-acf1c343b1df795b3fae878297a441e991cc76edf7d9c531e3e5f9a6519a1723.scope.
May 17 00:42:02.915890 env[1190]: time="2025-05-17T00:42:02.915623054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-b5ee3a085c,Uid:d9fd166df5c160069c02ad747e719d04,Namespace:kube-system,Attempt:0,} returns sandbox id \"61d606bc3d1c7a83415b43460f045a7ff15d643593ff5f81297faff94393bbc4\""
May 17 00:42:02.918234 kubelet[1549]: E0517 00:42:02.917974 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:02.928186 env[1190]: time="2025-05-17T00:42:02.928118336Z" level=info msg="CreateContainer within sandbox \"61d606bc3d1c7a83415b43460f045a7ff15d643593ff5f81297faff94393bbc4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 00:42:02.933874 kubelet[1549]: E0517 00:42:02.930195 1549 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.137.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 17 00:42:02.948043 env[1190]: time="2025-05-17T00:42:02.947967313Z" level=info msg="CreateContainer within sandbox \"61d606bc3d1c7a83415b43460f045a7ff15d643593ff5f81297faff94393bbc4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"17a2b331750ba68ddf0c86edafc832eca0c572bad65158d2116e45d2d092c54c\""
May 17 00:42:02.949210 env[1190]: time="2025-05-17T00:42:02.949156458Z" level=info msg="StartContainer for \"17a2b331750ba68ddf0c86edafc832eca0c572bad65158d2116e45d2d092c54c\""
May 17 00:42:02.957571 env[1190]: time="2025-05-17T00:42:02.955863808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-b5ee3a085c,Uid:051766354b015ea3c297fadc8887812a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5c053f09599151df57b433cd67cc4b56f8d8381aeaea3a951cebc25af3666c5\""
May 17 00:42:02.957811 kubelet[1549]: E0517 00:42:02.957328 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:02.961792 env[1190]: time="2025-05-17T00:42:02.961725367Z" level=info msg="CreateContainer within sandbox \"f5c053f09599151df57b433cd67cc4b56f8d8381aeaea3a951cebc25af3666c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 00:42:02.975082 env[1190]: time="2025-05-17T00:42:02.974899612Z" level=info msg="CreateContainer within sandbox \"f5c053f09599151df57b433cd67cc4b56f8d8381aeaea3a951cebc25af3666c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0a32207d5b9b44794d511cf5e972d5884b35f291e30523a708771bad2b34cd5c\""
May 17 00:42:02.978788 env[1190]: time="2025-05-17T00:42:02.978717337Z" level=info msg="StartContainer for \"0a32207d5b9b44794d511cf5e972d5884b35f291e30523a708771bad2b34cd5c\""
May 17 00:42:02.984666 env[1190]: time="2025-05-17T00:42:02.984567244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-b5ee3a085c,Uid:cde46b721823566f5cdbb90f5f694160,Namespace:kube-system,Attempt:0,} returns sandbox id \"acf1c343b1df795b3fae878297a441e991cc76edf7d9c531e3e5f9a6519a1723\""
May 17 00:42:02.987544 kubelet[1549]: E0517 00:42:02.987451 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:02.997256 env[1190]: time="2025-05-17T00:42:02.995775580Z" level=info msg="CreateContainer within sandbox \"acf1c343b1df795b3fae878297a441e991cc76edf7d9c531e3e5f9a6519a1723\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 00:42:03.009667 systemd[1]: Started cri-containerd-17a2b331750ba68ddf0c86edafc832eca0c572bad65158d2116e45d2d092c54c.scope.
May 17 00:42:03.019124 env[1190]: time="2025-05-17T00:42:03.019057344Z" level=info msg="CreateContainer within sandbox \"acf1c343b1df795b3fae878297a441e991cc76edf7d9c531e3e5f9a6519a1723\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6cad149a44071a9b394cb670ba7cbd8fa7f0676a9b794fffd903dff079dc833\""
May 17 00:42:03.021378 kubelet[1549]: E0517 00:42:03.020699 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.137.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-b5ee3a085c?timeout=10s\": dial tcp 64.23.137.34:6443: connect: connection refused" interval="1.6s"
May 17 00:42:03.023339 env[1190]: time="2025-05-17T00:42:03.023276817Z" level=info msg="StartContainer for \"b6cad149a44071a9b394cb670ba7cbd8fa7f0676a9b794fffd903dff079dc833\""
May 17 00:42:03.059658 systemd[1]: Started cri-containerd-0a32207d5b9b44794d511cf5e972d5884b35f291e30523a708771bad2b34cd5c.scope.
May 17 00:42:03.095707 systemd[1]: Started cri-containerd-b6cad149a44071a9b394cb670ba7cbd8fa7f0676a9b794fffd903dff079dc833.scope.
May 17 00:42:03.139943 env[1190]: time="2025-05-17T00:42:03.139873766Z" level=info msg="StartContainer for \"17a2b331750ba68ddf0c86edafc832eca0c572bad65158d2116e45d2d092c54c\" returns successfully"
May 17 00:42:03.165192 env[1190]: time="2025-05-17T00:42:03.165114219Z" level=info msg="StartContainer for \"0a32207d5b9b44794d511cf5e972d5884b35f291e30523a708771bad2b34cd5c\" returns successfully"
May 17 00:42:03.194569 kubelet[1549]: E0517 00:42:03.194480 1549 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.137.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 17 00:42:03.213494 env[1190]: time="2025-05-17T00:42:03.213428272Z" level=info msg="StartContainer for \"b6cad149a44071a9b394cb670ba7cbd8fa7f0676a9b794fffd903dff079dc833\" returns successfully"
May 17 00:42:03.250089 kubelet[1549]: I0517 00:42:03.249934 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:03.250944 kubelet[1549]: E0517 00:42:03.250647 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.137.34:6443/api/v1/nodes\": dial tcp 64.23.137.34:6443: connect: connection refused" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:03.628653 kubelet[1549]: E0517 00:42:03.628461 1549 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.137.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.137.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 17 00:42:03.682533 kubelet[1549]: E0517 00:42:03.682458 1549 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-b5ee3a085c\" not found" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:03.682790 kubelet[1549]: E0517 00:42:03.682742 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:03.684738 kubelet[1549]: E0517 00:42:03.684690 1549 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-b5ee3a085c\" not found" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:03.685010 kubelet[1549]: E0517 00:42:03.684893 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:03.687568 kubelet[1549]: E0517 00:42:03.687495 1549 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-b5ee3a085c\" not found" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:03.687818 kubelet[1549]: E0517 00:42:03.687789 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:04.689861 kubelet[1549]: E0517 00:42:04.689809 1549 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-b5ee3a085c\" not found" node="ci-3510.3.7-n-b5ee3a085c"
May 17 00:42:04.690566 kubelet[1549]: E0517 00:42:04.690038 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:04.690566 kubelet[1549]: E0517 00:42:04.690469 1549 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node
\"ci-3510.3.7-n-b5ee3a085c\" not found" node="ci-3510.3.7-n-b5ee3a085c" May 17 00:42:04.690680 kubelet[1549]: E0517 00:42:04.690649 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:04.852602 kubelet[1549]: I0517 00:42:04.852550 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-b5ee3a085c" May 17 00:42:06.817867 kubelet[1549]: E0517 00:42:06.817796 1549 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-b5ee3a085c\" not found" node="ci-3510.3.7-n-b5ee3a085c" May 17 00:42:07.023447 kubelet[1549]: I0517 00:42:07.023361 1549 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-b5ee3a085c" May 17 00:42:07.023818 kubelet[1549]: E0517 00:42:07.023782 1549 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.7-n-b5ee3a085c\": node \"ci-3510.3.7-n-b5ee3a085c\" not found" May 17 00:42:07.114107 kubelet[1549]: I0517 00:42:07.113871 1549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:07.142816 kubelet[1549]: E0517 00:42:07.142733 1549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-b5ee3a085c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:07.142816 kubelet[1549]: I0517 00:42:07.142809 1549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:07.147414 kubelet[1549]: E0517 00:42:07.147341 1549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:07.147414 kubelet[1549]: I0517 00:42:07.147390 1549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:07.152301 kubelet[1549]: E0517 00:42:07.152176 1549 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-b5ee3a085c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:07.579086 kubelet[1549]: I0517 00:42:07.578981 1549 apiserver.go:52] "Watching apiserver" May 17 00:42:07.612712 kubelet[1549]: I0517 00:42:07.612573 1549 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:42:07.974549 kubelet[1549]: I0517 00:42:07.974471 1549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:07.985898 kubelet[1549]: I0517 00:42:07.985844 1549 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:42:07.986983 kubelet[1549]: E0517 00:42:07.986932 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:08.697295 kubelet[1549]: E0517 00:42:08.697247 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:09.019454 kubelet[1549]: I0517 00:42:09.019253 1549 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:09.028803 kubelet[1549]: I0517 00:42:09.028757 1549 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:42:09.029415 kubelet[1549]: E0517 00:42:09.029386 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:09.479745 systemd[1]: Reloading. May 17 00:42:09.613270 /usr/lib/systemd/system-generators/torcx-generator[1846]: time="2025-05-17T00:42:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:42:09.613307 /usr/lib/systemd/system-generators/torcx-generator[1846]: time="2025-05-17T00:42:09Z" level=info msg="torcx already run" May 17 00:42:09.698390 kubelet[1549]: E0517 00:42:09.698323 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:09.750474 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:42:09.751230 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:42:09.775748 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:42:09.931992 systemd[1]: Stopping kubelet.service... May 17 00:42:09.953332 systemd[1]: kubelet.service: Deactivated successfully. 
May 17 00:42:09.953608 systemd[1]: Stopped kubelet.service. May 17 00:42:09.953690 systemd[1]: kubelet.service: Consumed 1.196s CPU time. May 17 00:42:09.956361 systemd[1]: Starting kubelet.service... May 17 00:42:11.140285 systemd[1]: Started kubelet.service. May 17 00:42:11.259105 kubelet[1897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:42:11.259105 kubelet[1897]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:42:11.259105 kubelet[1897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:42:11.259798 kubelet[1897]: I0517 00:42:11.259227 1897 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:42:11.281495 kubelet[1897]: I0517 00:42:11.280909 1897 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:42:11.281903 kubelet[1897]: I0517 00:42:11.281868 1897 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:42:11.283056 kubelet[1897]: I0517 00:42:11.283014 1897 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:42:11.286387 kubelet[1897]: I0517 00:42:11.286334 1897 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 17 00:42:11.300987 kubelet[1897]: I0517 00:42:11.300922 1897 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:42:11.302278 sudo[1912]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:42:11.302766 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:42:11.328794 kubelet[1897]: E0517 00:42:11.328736 1897 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:42:11.329093 kubelet[1897]: I0517 00:42:11.329069 1897 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:42:11.333605 kubelet[1897]: I0517 00:42:11.333549 1897 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:42:11.334777 kubelet[1897]: I0517 00:42:11.334705 1897 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:42:11.335314 kubelet[1897]: I0517 00:42:11.335044 1897 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-b5ee3a085c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:42:11.335623 kubelet[1897]: I0517 00:42:11.335602 1897 topology_manager.go:138] "Creating topology manager with none policy" May 17 
00:42:11.335737 kubelet[1897]: I0517 00:42:11.335721 1897 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:42:11.335915 kubelet[1897]: I0517 00:42:11.335896 1897 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:11.336351 kubelet[1897]: I0517 00:42:11.336317 1897 kubelet.go:480] "Attempting to sync node with API server" May 17 00:42:11.344685 kubelet[1897]: I0517 00:42:11.344637 1897 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:42:11.345077 kubelet[1897]: I0517 00:42:11.344991 1897 kubelet.go:386] "Adding apiserver pod source" May 17 00:42:11.364769 kubelet[1897]: I0517 00:42:11.364693 1897 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:42:11.375562 kubelet[1897]: I0517 00:42:11.375518 1897 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:42:11.376822 kubelet[1897]: I0517 00:42:11.376776 1897 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:42:11.403467 kubelet[1897]: I0517 00:42:11.403295 1897 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:42:11.410304 kubelet[1897]: I0517 00:42:11.410254 1897 server.go:1289] "Started kubelet" May 17 00:42:11.410719 kubelet[1897]: I0517 00:42:11.410667 1897 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:42:11.412072 kubelet[1897]: I0517 00:42:11.412040 1897 server.go:317] "Adding debug handlers to kubelet server" May 17 00:42:11.415466 kubelet[1897]: I0517 00:42:11.415415 1897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:42:11.419900 kubelet[1897]: I0517 00:42:11.419846 1897 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 
00:42:11.423173 kubelet[1897]: I0517 00:42:11.423120 1897 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:42:11.423957 kubelet[1897]: E0517 00:42:11.423914 1897 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-b5ee3a085c\" not found" May 17 00:42:11.427299 kubelet[1897]: I0517 00:42:11.425139 1897 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:42:11.428006 kubelet[1897]: I0517 00:42:11.427969 1897 reconciler.go:26] "Reconciler: start to sync state" May 17 00:42:11.434344 kubelet[1897]: I0517 00:42:11.434273 1897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:42:11.435031 kubelet[1897]: I0517 00:42:11.434992 1897 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:42:11.446270 kubelet[1897]: I0517 00:42:11.441781 1897 factory.go:223] Registration of the systemd container factory successfully May 17 00:42:11.446270 kubelet[1897]: I0517 00:42:11.441943 1897 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:42:11.452634 kubelet[1897]: E0517 00:42:11.451287 1897 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:42:11.458704 kubelet[1897]: I0517 00:42:11.458649 1897 factory.go:223] Registration of the containerd container factory successfully May 17 00:42:11.567898 kubelet[1897]: I0517 00:42:11.567858 1897 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:42:11.568266 kubelet[1897]: I0517 00:42:11.568244 1897 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:42:11.568437 kubelet[1897]: I0517 00:42:11.568422 1897 state_mem.go:36] "Initialized new in-memory state store" May 17 00:42:11.568939 kubelet[1897]: I0517 00:42:11.568906 1897 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:42:11.569155 kubelet[1897]: I0517 00:42:11.569104 1897 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:42:11.569284 kubelet[1897]: I0517 00:42:11.569268 1897 policy_none.go:49] "None policy: Start" May 17 00:42:11.569403 kubelet[1897]: I0517 00:42:11.569371 1897 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:42:11.569524 kubelet[1897]: I0517 00:42:11.569494 1897 state_mem.go:35] "Initializing new in-memory state store" May 17 00:42:11.569864 kubelet[1897]: I0517 00:42:11.569837 1897 state_mem.go:75] "Updated machine memory state" May 17 00:42:11.574150 kubelet[1897]: I0517 00:42:11.574030 1897 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:42:11.576451 kubelet[1897]: I0517 00:42:11.576388 1897 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:42:11.576451 kubelet[1897]: I0517 00:42:11.576438 1897 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:42:11.576746 kubelet[1897]: I0517 00:42:11.576478 1897 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:42:11.576746 kubelet[1897]: I0517 00:42:11.576492 1897 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:42:11.578721 kubelet[1897]: E0517 00:42:11.578647 1897 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:42:11.607110 kubelet[1897]: E0517 00:42:11.607074 1897 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:42:11.608202 kubelet[1897]: I0517 00:42:11.608169 1897 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:42:11.610435 kubelet[1897]: I0517 00:42:11.608409 1897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:42:11.616671 kubelet[1897]: I0517 00:42:11.616493 1897 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:42:11.626783 kubelet[1897]: E0517 00:42:11.624349 1897 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:42:11.694093 kubelet[1897]: I0517 00:42:11.680520 1897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.694093 kubelet[1897]: I0517 00:42:11.681299 1897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.694093 kubelet[1897]: I0517 00:42:11.682005 1897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.703469 kubelet[1897]: I0517 00:42:11.703418 1897 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:42:11.703842 kubelet[1897]: E0517 00:42:11.703816 1897 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.706119 kubelet[1897]: I0517 00:42:11.706073 1897 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:42:11.714148 kubelet[1897]: I0517 00:42:11.714091 1897 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:42:11.714700 kubelet[1897]: E0517 00:42:11.714661 1897 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-b5ee3a085c\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.726841 kubelet[1897]: I0517 00:42:11.726794 1897 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.736999 kubelet[1897]: 
I0517 00:42:11.736927 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.737426 kubelet[1897]: I0517 00:42:11.737388 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.737671 kubelet[1897]: I0517 00:42:11.737637 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.737836 kubelet[1897]: I0517 00:42:11.737805 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/051766354b015ea3c297fadc8887812a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-b5ee3a085c\" (UID: \"051766354b015ea3c297fadc8887812a\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.737971 kubelet[1897]: I0517 00:42:11.737950 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9fd166df5c160069c02ad747e719d04-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-3510.3.7-n-b5ee3a085c\" (UID: \"d9fd166df5c160069c02ad747e719d04\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.738117 kubelet[1897]: I0517 00:42:11.738097 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.738277 kubelet[1897]: I0517 00:42:11.738255 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9fd166df5c160069c02ad747e719d04-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b5ee3a085c\" (UID: \"d9fd166df5c160069c02ad747e719d04\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.738449 kubelet[1897]: I0517 00:42:11.738410 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9fd166df5c160069c02ad747e719d04-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-b5ee3a085c\" (UID: \"d9fd166df5c160069c02ad747e719d04\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.738632 kubelet[1897]: I0517 00:42:11.738611 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cde46b721823566f5cdbb90f5f694160-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-b5ee3a085c\" (UID: \"cde46b721823566f5cdbb90f5f694160\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:11.750679 kubelet[1897]: I0517 00:42:11.750629 1897 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.7-n-b5ee3a085c" May 17 
00:42:11.751199 kubelet[1897]: I0517 00:42:11.751169 1897 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-b5ee3a085c" May 17 00:42:12.006586 kubelet[1897]: E0517 00:42:12.006392 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:12.007121 kubelet[1897]: E0517 00:42:12.007048 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:12.016454 kubelet[1897]: E0517 00:42:12.016398 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:12.207260 sudo[1912]: pam_unix(sudo:session): session closed for user root May 17 00:42:12.371793 kubelet[1897]: I0517 00:42:12.371572 1897 apiserver.go:52] "Watching apiserver" May 17 00:42:12.428320 kubelet[1897]: I0517 00:42:12.428235 1897 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:42:12.627767 kubelet[1897]: E0517 00:42:12.627614 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:12.628095 kubelet[1897]: I0517 00:42:12.628074 1897 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:12.628529 kubelet[1897]: E0517 00:42:12.628482 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:12.639559 kubelet[1897]: I0517 00:42:12.639476 1897 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:42:12.639912 kubelet[1897]: E0517 00:42:12.639884 1897 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-b5ee3a085c\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c" May 17 00:42:12.640394 kubelet[1897]: E0517 00:42:12.640361 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:12.688861 kubelet[1897]: I0517 00:42:12.688762 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-b5ee3a085c" podStartSLOduration=3.688742674 podStartE2EDuration="3.688742674s" podCreationTimestamp="2025-05-17 00:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:12.686483151 +0000 UTC m=+1.523903198" watchObservedRunningTime="2025-05-17 00:42:12.688742674 +0000 UTC m=+1.526162720" May 17 00:42:12.722849 kubelet[1897]: I0517 00:42:12.722749 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-b5ee3a085c" podStartSLOduration=5.722722279 podStartE2EDuration="5.722722279s" podCreationTimestamp="2025-05-17 00:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:12.707309306 +0000 UTC m=+1.544729355" watchObservedRunningTime="2025-05-17 00:42:12.722722279 +0000 UTC m=+1.560142323" May 17 00:42:13.533224 kubelet[1897]: I0517 00:42:13.533145 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-3510.3.7-n-b5ee3a085c" podStartSLOduration=2.5331272240000002 podStartE2EDuration="2.533127224s" podCreationTimestamp="2025-05-17 00:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:12.725219496 +0000 UTC m=+1.562639554" watchObservedRunningTime="2025-05-17 00:42:13.533127224 +0000 UTC m=+2.370547275" May 17 00:42:13.630131 kubelet[1897]: E0517 00:42:13.630077 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:13.630570 kubelet[1897]: E0517 00:42:13.630537 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:13.631290 kubelet[1897]: E0517 00:42:13.631206 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:14.456074 sudo[1295]: pam_unix(sudo:session): session closed for user root May 17 00:42:14.459860 sshd[1291]: pam_unix(sshd:session): session closed for user core May 17 00:42:14.463773 systemd[1]: sshd@4-64.23.137.34:22-147.75.109.163:47618.service: Deactivated successfully. May 17 00:42:14.464727 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:42:14.464892 systemd[1]: session-5.scope: Consumed 6.563s CPU time. May 17 00:42:14.466219 systemd-logind[1179]: Session 5 logged out. Waiting for processes to exit. May 17 00:42:14.467455 systemd-logind[1179]: Removed session 5. 
May 17 00:42:14.632473 kubelet[1897]: E0517 00:42:14.632387 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:14.634737 kubelet[1897]: E0517 00:42:14.634698 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:15.634877 kubelet[1897]: E0517 00:42:15.634787 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:42:15.687448 kubelet[1897]: I0517 00:42:15.687408 1897 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:42:15.688023 env[1190]: time="2025-05-17T00:42:15.687968851Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:42:15.688456 kubelet[1897]: I0517 00:42:15.688306 1897 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:42:16.455008 systemd[1]: Created slice kubepods-besteffort-pod55bbf19a_e059_44f6_a3bd_68e00bc1ce1b.slice. 
May 17 00:42:16.458986 kubelet[1897]: I0517 00:42:16.458895 1897 status_manager.go:895] "Failed to get status for pod" podUID="55bbf19a-e059-44f6-a3bd-68e00bc1ce1b" pod="kube-system/kube-proxy-bshx8" err="pods \"kube-proxy-bshx8\" is forbidden: User \"system:node:ci-3510.3.7-n-b5ee3a085c\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-b5ee3a085c' and this object"
May 17 00:42:16.459312 kubelet[1897]: E0517 00:42:16.459271 1897 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-3510.3.7-n-b5ee3a085c\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-b5ee3a085c' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
May 17 00:42:16.459709 kubelet[1897]: E0517 00:42:16.459679 1897 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.7-n-b5ee3a085c\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.7-n-b5ee3a085c' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
May 17 00:42:16.476412 systemd[1]: Created slice kubepods-burstable-pod005ae1be_c53d_4f26_8325_4828c561090f.slice.
May 17 00:42:16.480444 kubelet[1897]: I0517 00:42:16.480382 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55bbf19a-e059-44f6-a3bd-68e00bc1ce1b-kube-proxy\") pod \"kube-proxy-bshx8\" (UID: \"55bbf19a-e059-44f6-a3bd-68e00bc1ce1b\") " pod="kube-system/kube-proxy-bshx8"
May 17 00:42:16.482809 kubelet[1897]: I0517 00:42:16.482074 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55bbf19a-e059-44f6-a3bd-68e00bc1ce1b-xtables-lock\") pod \"kube-proxy-bshx8\" (UID: \"55bbf19a-e059-44f6-a3bd-68e00bc1ce1b\") " pod="kube-system/kube-proxy-bshx8"
May 17 00:42:16.482809 kubelet[1897]: I0517 00:42:16.482161 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55bbf19a-e059-44f6-a3bd-68e00bc1ce1b-lib-modules\") pod \"kube-proxy-bshx8\" (UID: \"55bbf19a-e059-44f6-a3bd-68e00bc1ce1b\") " pod="kube-system/kube-proxy-bshx8"
May 17 00:42:16.482809 kubelet[1897]: I0517 00:42:16.482193 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvfg4\" (UniqueName: \"kubernetes.io/projected/55bbf19a-e059-44f6-a3bd-68e00bc1ce1b-kube-api-access-gvfg4\") pod \"kube-proxy-bshx8\" (UID: \"55bbf19a-e059-44f6-a3bd-68e00bc1ce1b\") " pod="kube-system/kube-proxy-bshx8"
May 17 00:42:16.583073 kubelet[1897]: I0517 00:42:16.582971 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-bpf-maps\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583073 kubelet[1897]: I0517 00:42:16.583060 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cilium-cgroup\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583427 kubelet[1897]: I0517 00:42:16.583095 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ht7\" (UniqueName: \"kubernetes.io/projected/005ae1be-c53d-4f26-8325-4828c561090f-kube-api-access-94ht7\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583427 kubelet[1897]: I0517 00:42:16.583127 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cilium-run\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583427 kubelet[1897]: I0517 00:42:16.583313 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-hostproc\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583427 kubelet[1897]: I0517 00:42:16.583348 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-xtables-lock\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583427 kubelet[1897]: I0517 00:42:16.583373 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/005ae1be-c53d-4f26-8325-4828c561090f-clustermesh-secrets\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583427 kubelet[1897]: I0517 00:42:16.583399 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-host-proc-sys-net\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583706 kubelet[1897]: I0517 00:42:16.583444 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-lib-modules\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583748 kubelet[1897]: I0517 00:42:16.583708 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cni-path\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583748 kubelet[1897]: I0517 00:42:16.583735 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-etc-cni-netd\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583816 kubelet[1897]: I0517 00:42:16.583771 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/005ae1be-c53d-4f26-8325-4828c561090f-cilium-config-path\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583816 kubelet[1897]: I0517 00:42:16.583797 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-host-proc-sys-kernel\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.583898 kubelet[1897]: I0517 00:42:16.583821 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/005ae1be-c53d-4f26-8325-4828c561090f-hubble-tls\") pod \"cilium-v7rhj\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") " pod="kube-system/cilium-v7rhj"
May 17 00:42:16.686542 kubelet[1897]: I0517 00:42:16.686420 1897 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 00:42:16.867908 systemd[1]: Created slice kubepods-besteffort-pod0d1300f0_3cfc_4b28_9463_e44841136d21.slice.
May 17 00:42:16.986499 kubelet[1897]: I0517 00:42:16.986386 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d1300f0-3cfc-4b28-9463-e44841136d21-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cvd2q\" (UID: \"0d1300f0-3cfc-4b28-9463-e44841136d21\") " pod="kube-system/cilium-operator-6c4d7847fc-cvd2q"
May 17 00:42:16.987005 kubelet[1897]: I0517 00:42:16.986906 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlghb\" (UniqueName: \"kubernetes.io/projected/0d1300f0-3cfc-4b28-9463-e44841136d21-kube-api-access-rlghb\") pod \"cilium-operator-6c4d7847fc-cvd2q\" (UID: \"0d1300f0-3cfc-4b28-9463-e44841136d21\") " pod="kube-system/cilium-operator-6c4d7847fc-cvd2q"
May 17 00:42:17.664203 kubelet[1897]: E0517 00:42:17.664135 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:17.666758 env[1190]: time="2025-05-17T00:42:17.666375289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bshx8,Uid:55bbf19a-e059-44f6-a3bd-68e00bc1ce1b,Namespace:kube-system,Attempt:0,}"
May 17 00:42:17.687201 kubelet[1897]: E0517 00:42:17.687129 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:17.691186 env[1190]: time="2025-05-17T00:42:17.691111692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v7rhj,Uid:005ae1be-c53d-4f26-8325-4828c561090f,Namespace:kube-system,Attempt:0,}"
May 17 00:42:17.724171 env[1190]: time="2025-05-17T00:42:17.717204062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:17.724171 env[1190]: time="2025-05-17T00:42:17.717275924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:17.724171 env[1190]: time="2025-05-17T00:42:17.717294498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:17.724171 env[1190]: time="2025-05-17T00:42:17.717473097Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f44cef33010cd205af24cf7d2cf928e6650ffa82fcd8bc98ea44acd3fc9a6ec pid=1983 runtime=io.containerd.runc.v2
May 17 00:42:17.743950 env[1190]: time="2025-05-17T00:42:17.743825823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:17.743950 env[1190]: time="2025-05-17T00:42:17.743893264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:17.744365 env[1190]: time="2025-05-17T00:42:17.743909284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:17.744365 env[1190]: time="2025-05-17T00:42:17.744177981Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79 pid=2001 runtime=io.containerd.runc.v2
May 17 00:42:17.765743 systemd[1]: Started cri-containerd-4f44cef33010cd205af24cf7d2cf928e6650ffa82fcd8bc98ea44acd3fc9a6ec.scope.
May 17 00:42:17.772774 kubelet[1897]: E0517 00:42:17.772665 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:17.775411 env[1190]: time="2025-05-17T00:42:17.773578972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cvd2q,Uid:0d1300f0-3cfc-4b28-9463-e44841136d21,Namespace:kube-system,Attempt:0,}"
May 17 00:42:17.789799 systemd[1]: Started cri-containerd-1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79.scope.
May 17 00:42:17.808558 env[1190]: time="2025-05-17T00:42:17.804744498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:17.808558 env[1190]: time="2025-05-17T00:42:17.804812959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:17.808558 env[1190]: time="2025-05-17T00:42:17.804828472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:17.808558 env[1190]: time="2025-05-17T00:42:17.805109964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140 pid=2034 runtime=io.containerd.runc.v2
May 17 00:42:17.837744 systemd[1]: Started cri-containerd-ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140.scope.
May 17 00:42:17.853136 env[1190]: time="2025-05-17T00:42:17.853055007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v7rhj,Uid:005ae1be-c53d-4f26-8325-4828c561090f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\""
May 17 00:42:17.855279 kubelet[1897]: E0517 00:42:17.854710 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:17.858748 env[1190]: time="2025-05-17T00:42:17.858694761Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:42:17.924946 env[1190]: time="2025-05-17T00:42:17.923544689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bshx8,Uid:55bbf19a-e059-44f6-a3bd-68e00bc1ce1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f44cef33010cd205af24cf7d2cf928e6650ffa82fcd8bc98ea44acd3fc9a6ec\""
May 17 00:42:17.926669 kubelet[1897]: E0517 00:42:17.926277 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:17.931565 env[1190]: time="2025-05-17T00:42:17.931465815Z" level=info msg="CreateContainer within sandbox \"4f44cef33010cd205af24cf7d2cf928e6650ffa82fcd8bc98ea44acd3fc9a6ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:42:17.951183 env[1190]: time="2025-05-17T00:42:17.951104502Z" level=info msg="CreateContainer within sandbox \"4f44cef33010cd205af24cf7d2cf928e6650ffa82fcd8bc98ea44acd3fc9a6ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c50a67b967a9506c0a08bd669f528b9a7863eb18b7a3c5dd693266083af72e35\""
May 17 00:42:17.953581 env[1190]: time="2025-05-17T00:42:17.952150596Z" level=info msg="StartContainer for \"c50a67b967a9506c0a08bd669f528b9a7863eb18b7a3c5dd693266083af72e35\""
May 17 00:42:17.960029 env[1190]: time="2025-05-17T00:42:17.959978589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cvd2q,Uid:0d1300f0-3cfc-4b28-9463-e44841136d21,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\""
May 17 00:42:17.960846 kubelet[1897]: E0517 00:42:17.960819 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:17.985455 systemd[1]: Started cri-containerd-c50a67b967a9506c0a08bd669f528b9a7863eb18b7a3c5dd693266083af72e35.scope.
May 17 00:42:18.035730 env[1190]: time="2025-05-17T00:42:18.035604667Z" level=info msg="StartContainer for \"c50a67b967a9506c0a08bd669f528b9a7863eb18b7a3c5dd693266083af72e35\" returns successfully"
May 17 00:42:18.642989 kubelet[1897]: E0517 00:42:18.642929 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:19.204558 kubelet[1897]: E0517 00:42:19.203891 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:19.239729 kubelet[1897]: I0517 00:42:19.239651 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bshx8" podStartSLOduration=3.239631244 podStartE2EDuration="3.239631244s" podCreationTimestamp="2025-05-17 00:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:18.659742726 +0000 UTC m=+7.497162777" watchObservedRunningTime="2025-05-17 00:42:19.239631244 +0000 UTC m=+8.077051326"
May 17 00:42:19.652823 kubelet[1897]: E0517 00:42:19.652363 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:20.661414 kubelet[1897]: E0517 00:42:20.660285 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:23.397158 update_engine[1180]: I0517 00:42:23.397047 1180 update_attempter.cc:509] Updating boot flags...
May 17 00:42:24.004253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369583762.mount: Deactivated successfully.
May 17 00:42:27.567819 env[1190]: time="2025-05-17T00:42:27.567724823Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:27.570793 env[1190]: time="2025-05-17T00:42:27.570733794Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:27.573239 env[1190]: time="2025-05-17T00:42:27.573171761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:27.574160 env[1190]: time="2025-05-17T00:42:27.574105466Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 17 00:42:27.580308 env[1190]: time="2025-05-17T00:42:27.580244685Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 00:42:27.591250 env[1190]: time="2025-05-17T00:42:27.591196162Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:42:27.604086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155627678.mount: Deactivated successfully.
May 17 00:42:27.612668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092333255.mount: Deactivated successfully.
May 17 00:42:27.618297 env[1190]: time="2025-05-17T00:42:27.618219384Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4\""
May 17 00:42:27.621936 env[1190]: time="2025-05-17T00:42:27.621890590Z" level=info msg="StartContainer for \"f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4\""
May 17 00:42:27.647399 systemd[1]: Started cri-containerd-f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4.scope.
May 17 00:42:27.704402 env[1190]: time="2025-05-17T00:42:27.704343386Z" level=info msg="StartContainer for \"f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4\" returns successfully"
May 17 00:42:27.720305 systemd[1]: cri-containerd-f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4.scope: Deactivated successfully.
May 17 00:42:27.753669 env[1190]: time="2025-05-17T00:42:27.753579263Z" level=info msg="shim disconnected" id=f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4
May 17 00:42:27.753669 env[1190]: time="2025-05-17T00:42:27.753662303Z" level=warning msg="cleaning up after shim disconnected" id=f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4 namespace=k8s.io
May 17 00:42:27.753669 env[1190]: time="2025-05-17T00:42:27.753680366Z" level=info msg="cleaning up dead shim"
May 17 00:42:27.765818 env[1190]: time="2025-05-17T00:42:27.765746290Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2326 runtime=io.containerd.runc.v2\n"
May 17 00:42:28.602773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4-rootfs.mount: Deactivated successfully.
May 17 00:42:28.688470 kubelet[1897]: E0517 00:42:28.687235 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:28.694268 env[1190]: time="2025-05-17T00:42:28.694152839Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:42:28.714579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215082633.mount: Deactivated successfully.
May 17 00:42:28.725337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228255461.mount: Deactivated successfully.
May 17 00:42:28.739824 env[1190]: time="2025-05-17T00:42:28.739761252Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95\""
May 17 00:42:28.741559 env[1190]: time="2025-05-17T00:42:28.740799625Z" level=info msg="StartContainer for \"84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95\""
May 17 00:42:28.773033 systemd[1]: Started cri-containerd-84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95.scope.
May 17 00:42:28.835545 env[1190]: time="2025-05-17T00:42:28.835454257Z" level=info msg="StartContainer for \"84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95\" returns successfully"
May 17 00:42:28.845958 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:42:28.846569 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:42:28.846846 systemd[1]: Stopping systemd-sysctl.service...
May 17 00:42:28.850643 systemd[1]: Starting systemd-sysctl.service...
May 17 00:42:28.859462 systemd[1]: cri-containerd-84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95.scope: Deactivated successfully.
May 17 00:42:28.871834 systemd[1]: Finished systemd-sysctl.service.
May 17 00:42:28.899932 env[1190]: time="2025-05-17T00:42:28.899841719Z" level=info msg="shim disconnected" id=84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95
May 17 00:42:28.899932 env[1190]: time="2025-05-17T00:42:28.899930944Z" level=warning msg="cleaning up after shim disconnected" id=84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95 namespace=k8s.io
May 17 00:42:28.899932 env[1190]: time="2025-05-17T00:42:28.899949371Z" level=info msg="cleaning up dead shim"
May 17 00:42:28.915677 env[1190]: time="2025-05-17T00:42:28.915607586Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2389 runtime=io.containerd.runc.v2\n"
May 17 00:42:29.696150 kubelet[1897]: E0517 00:42:29.696089 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:29.708609 env[1190]: time="2025-05-17T00:42:29.708539588Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:42:29.753559 env[1190]: time="2025-05-17T00:42:29.751799718Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998\""
May 17 00:42:29.753559 env[1190]: time="2025-05-17T00:42:29.753247996Z" level=info msg="StartContainer for \"9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998\""
May 17 00:42:29.752334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605004257.mount: Deactivated successfully.
May 17 00:42:29.838777 systemd[1]: Started cri-containerd-9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998.scope.
May 17 00:42:30.012420 env[1190]: time="2025-05-17T00:42:30.012225614Z" level=info msg="StartContainer for \"9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998\" returns successfully"
May 17 00:42:30.023563 systemd[1]: cri-containerd-9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998.scope: Deactivated successfully.
May 17 00:42:30.072639 env[1190]: time="2025-05-17T00:42:30.072360864Z" level=info msg="shim disconnected" id=9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998
May 17 00:42:30.073474 env[1190]: time="2025-05-17T00:42:30.073418341Z" level=warning msg="cleaning up after shim disconnected" id=9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998 namespace=k8s.io
May 17 00:42:30.073716 env[1190]: time="2025-05-17T00:42:30.073574543Z" level=info msg="cleaning up dead shim"
May 17 00:42:30.100138 env[1190]: time="2025-05-17T00:42:30.100085889Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2446 runtime=io.containerd.runc.v2\n"
May 17 00:42:30.352167 env[1190]: time="2025-05-17T00:42:30.350499392Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:30.354777 env[1190]: time="2025-05-17T00:42:30.354631261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:30.357114 env[1190]: time="2025-05-17T00:42:30.357044467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:42:30.357748 env[1190]: time="2025-05-17T00:42:30.357687299Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 17 00:42:30.369200 env[1190]: time="2025-05-17T00:42:30.369124832Z" level=info msg="CreateContainer within sandbox \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:42:30.385426 env[1190]: time="2025-05-17T00:42:30.385331911Z" level=info msg="CreateContainer within sandbox \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\""
May 17 00:42:30.386737 env[1190]: time="2025-05-17T00:42:30.386695513Z" level=info msg="StartContainer for \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\""
May 17 00:42:30.415820 systemd[1]: Started cri-containerd-51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630.scope.
May 17 00:42:30.468886 env[1190]: time="2025-05-17T00:42:30.468818230Z" level=info msg="StartContainer for \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\" returns successfully"
May 17 00:42:30.605247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998-rootfs.mount: Deactivated successfully.
May 17 00:42:30.697153 kubelet[1897]: E0517 00:42:30.697109 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:30.708545 env[1190]: time="2025-05-17T00:42:30.708480005Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:42:30.708815 kubelet[1897]: E0517 00:42:30.708726 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:30.731434 env[1190]: time="2025-05-17T00:42:30.731208453Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949\""
May 17 00:42:30.732134 env[1190]: time="2025-05-17T00:42:30.732095162Z" level=info msg="StartContainer for \"9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949\""
May 17 00:42:30.801919 systemd[1]: Started cri-containerd-9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949.scope.
May 17 00:42:30.855675 kubelet[1897]: I0517 00:42:30.855456 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cvd2q" podStartSLOduration=2.459313699 podStartE2EDuration="14.855427913s" podCreationTimestamp="2025-05-17 00:42:16 +0000 UTC" firstStartedPulling="2025-05-17 00:42:17.96349216 +0000 UTC m=+6.800912191" lastFinishedPulling="2025-05-17 00:42:30.359606361 +0000 UTC m=+19.197026405" observedRunningTime="2025-05-17 00:42:30.852259819 +0000 UTC m=+19.689679870" watchObservedRunningTime="2025-05-17 00:42:30.855427913 +0000 UTC m=+19.692847963"
May 17 00:42:30.922499 env[1190]: time="2025-05-17T00:42:30.922423986Z" level=info msg="StartContainer for \"9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949\" returns successfully"
May 17 00:42:30.930234 systemd[1]: cri-containerd-9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949.scope: Deactivated successfully.
May 17 00:42:30.981096 env[1190]: time="2025-05-17T00:42:30.981029775Z" level=info msg="shim disconnected" id=9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949
May 17 00:42:30.981096 env[1190]: time="2025-05-17T00:42:30.981084220Z" level=warning msg="cleaning up after shim disconnected" id=9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949 namespace=k8s.io
May 17 00:42:30.981096 env[1190]: time="2025-05-17T00:42:30.981094314Z" level=info msg="cleaning up dead shim"
May 17 00:42:31.000671 env[1190]: time="2025-05-17T00:42:31.000570347Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:42:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2544 runtime=io.containerd.runc.v2\n"
May 17 00:42:31.604330 systemd[1]: run-containerd-runc-k8s.io-9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949-runc.LoV9qG.mount: Deactivated successfully.
May 17 00:42:31.604974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949-rootfs.mount: Deactivated successfully.
May 17 00:42:31.733218 kubelet[1897]: E0517 00:42:31.733148 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:31.733908 kubelet[1897]: E0517 00:42:31.733875 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:31.755332 env[1190]: time="2025-05-17T00:42:31.755056416Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:42:31.792717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3499300844.mount: Deactivated successfully.
May 17 00:42:31.805948 env[1190]: time="2025-05-17T00:42:31.805852274Z" level=info msg="CreateContainer within sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6\""
May 17 00:42:31.808563 env[1190]: time="2025-05-17T00:42:31.808479171Z" level=info msg="StartContainer for \"0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6\""
May 17 00:42:31.842062 systemd[1]: Started cri-containerd-0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6.scope.
May 17 00:42:31.893346 env[1190]: time="2025-05-17T00:42:31.893160530Z" level=info msg="StartContainer for \"0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6\" returns successfully"
May 17 00:42:32.054129 kubelet[1897]: I0517 00:42:32.053419 1897 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 17 00:42:32.130177 systemd[1]: Created slice kubepods-burstable-pode2008adb_9f26_4621_9a9b_122d49c04850.slice.
May 17 00:42:32.147410 systemd[1]: Created slice kubepods-burstable-podc6c0acbd_b1a8_4b93_b86d_62c4ce37e26e.slice.
May 17 00:42:32.213925 kubelet[1897]: I0517 00:42:32.213784 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2008adb-9f26-4621-9a9b-122d49c04850-config-volume\") pod \"coredns-674b8bbfcf-h95nk\" (UID: \"e2008adb-9f26-4621-9a9b-122d49c04850\") " pod="kube-system/coredns-674b8bbfcf-h95nk"
May 17 00:42:32.213925 kubelet[1897]: I0517 00:42:32.213915 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sc96\" (UniqueName: \"kubernetes.io/projected/e2008adb-9f26-4621-9a9b-122d49c04850-kube-api-access-4sc96\") pod \"coredns-674b8bbfcf-h95nk\" (UID: \"e2008adb-9f26-4621-9a9b-122d49c04850\") " pod="kube-system/coredns-674b8bbfcf-h95nk"
May 17 00:42:32.315456 kubelet[1897]: I0517 00:42:32.315310 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6c0acbd-b1a8-4b93-b86d-62c4ce37e26e-config-volume\") pod \"coredns-674b8bbfcf-5h7p6\" (UID: \"c6c0acbd-b1a8-4b93-b86d-62c4ce37e26e\") " pod="kube-system/coredns-674b8bbfcf-5h7p6"
May 17 00:42:32.315456 kubelet[1897]: I0517 00:42:32.315461 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs89b\" (UniqueName: \"kubernetes.io/projected/c6c0acbd-b1a8-4b93-b86d-62c4ce37e26e-kube-api-access-vs89b\") pod \"coredns-674b8bbfcf-5h7p6\" (UID: \"c6c0acbd-b1a8-4b93-b86d-62c4ce37e26e\") " pod="kube-system/coredns-674b8bbfcf-5h7p6"
May 17 00:42:32.439712 kubelet[1897]: E0517 00:42:32.439659 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:32.441223 env[1190]: time="2025-05-17T00:42:32.440779734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h95nk,Uid:e2008adb-9f26-4621-9a9b-122d49c04850,Namespace:kube-system,Attempt:0,}"
May 17 00:42:32.454340 kubelet[1897]: E0517 00:42:32.454300 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:32.457546 env[1190]: time="2025-05-17T00:42:32.456774971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5h7p6,Uid:c6c0acbd-b1a8-4b93-b86d-62c4ce37e26e,Namespace:kube-system,Attempt:0,}"
May 17 00:42:32.740348 kubelet[1897]: E0517 00:42:32.739373 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:32.777711 kubelet[1897]: I0517 00:42:32.777088 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v7rhj" podStartSLOduration=7.058586903 podStartE2EDuration="16.777028227s" podCreationTimestamp="2025-05-17 00:42:16 +0000 UTC" firstStartedPulling="2025-05-17 00:42:17.858249289 +0000 UTC m=+6.695669318" lastFinishedPulling="2025-05-17 00:42:27.576690606 +0000 UTC m=+16.414110642" observedRunningTime="2025-05-17 00:42:32.777043913 +0000 UTC m=+21.614463956" watchObservedRunningTime="2025-05-17 00:42:32.777028227 +0000 UTC m=+21.614448278"
May 17 00:42:33.742005 kubelet[1897]: E0517 00:42:33.741964 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:34.526021 systemd-networkd[1010]: cilium_host: Link UP
May 17 00:42:34.534941 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 17 00:42:34.535116 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 17 00:42:34.533343 systemd-networkd[1010]: cilium_net: Link UP
May 17 00:42:34.534014 systemd-networkd[1010]: cilium_net: Gained carrier
May 17 00:42:34.534834 systemd-networkd[1010]: cilium_host: Gained carrier
May 17 00:42:34.711327 systemd-networkd[1010]: cilium_vxlan: Link UP
May 17 00:42:34.711339 systemd-networkd[1010]: cilium_vxlan: Gained carrier
May 17 00:42:34.744256 kubelet[1897]: E0517 00:42:34.743813 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:34.869847 systemd-networkd[1010]: cilium_host: Gained IPv6LL
May 17 00:42:34.941780 systemd-networkd[1010]: cilium_net: Gained IPv6LL
May 17 00:42:35.147546 kernel: NET: Registered PF_ALG protocol family
May 17 00:42:36.104562 systemd-networkd[1010]: lxc_health: Link UP
May 17 00:42:36.128147 systemd-networkd[1010]: lxc_health: Gained carrier
May 17 00:42:36.132563 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:42:36.513341 systemd-networkd[1010]: lxc4b8ef409a597: Link UP
May 17 00:42:36.521628 kernel: eth0: renamed from tmpb0766
May 17 00:42:36.527109 systemd-networkd[1010]: lxc4b8ef409a597: Gained carrier
May 17 00:42:36.528053 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4b8ef409a597: link becomes ready
May 17 00:42:36.537729 systemd-networkd[1010]: lxc68b7b93643b4: Link UP
May 17 00:42:36.543557 kernel: eth0: renamed from tmp46977
May 17 00:42:36.547019 systemd-networkd[1010]: lxc68b7b93643b4: Gained carrier
May 17 00:42:36.547742 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc68b7b93643b4: link becomes ready
May 17 00:42:36.670075 systemd-networkd[1010]: cilium_vxlan: Gained IPv6LL
May 17 00:42:37.685839 systemd-networkd[1010]: lxc4b8ef409a597: Gained IPv6LL
May 17 00:42:37.699542 kubelet[1897]: E0517 00:42:37.699470 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:37.753441 kubelet[1897]: E0517 00:42:37.753361 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:37.941730 systemd-networkd[1010]: lxc_health: Gained IPv6LL
May 17 00:42:38.581733 systemd-networkd[1010]: lxc68b7b93643b4: Gained IPv6LL
May 17 00:42:38.755133 kubelet[1897]: E0517 00:42:38.755091 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:41.697798 env[1190]: time="2025-05-17T00:42:41.697685978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:41.698479 env[1190]: time="2025-05-17T00:42:41.698341968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:41.698655 env[1190]: time="2025-05-17T00:42:41.698626310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:41.698966 env[1190]: time="2025-05-17T00:42:41.698922954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b076646630a9e18dbd73d5ef32f318a99204fde393123a94b583734ecbb40e04 pid=3102 runtime=io.containerd.runc.v2
May 17 00:42:41.725177 systemd[1]: Started cri-containerd-b076646630a9e18dbd73d5ef32f318a99204fde393123a94b583734ecbb40e04.scope.
May 17 00:42:41.742231 systemd[1]: run-containerd-runc-k8s.io-b076646630a9e18dbd73d5ef32f318a99204fde393123a94b583734ecbb40e04-runc.rv94KF.mount: Deactivated successfully.
May 17 00:42:41.885920 env[1190]: time="2025-05-17T00:42:41.885868422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h95nk,Uid:e2008adb-9f26-4621-9a9b-122d49c04850,Namespace:kube-system,Attempt:0,} returns sandbox id \"b076646630a9e18dbd73d5ef32f318a99204fde393123a94b583734ecbb40e04\""
May 17 00:42:41.887852 kubelet[1897]: E0517 00:42:41.887806 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:41.894747 env[1190]: time="2025-05-17T00:42:41.894692379Z" level=info msg="CreateContainer within sandbox \"b076646630a9e18dbd73d5ef32f318a99204fde393123a94b583734ecbb40e04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:42:41.917125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749168657.mount: Deactivated successfully.
May 17 00:42:41.925049 env[1190]: time="2025-05-17T00:42:41.924963432Z" level=info msg="CreateContainer within sandbox \"b076646630a9e18dbd73d5ef32f318a99204fde393123a94b583734ecbb40e04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1e03463516cf24df4e531ee146e38dc4ef010936739f7db616a8bd3b170a363\""
May 17 00:42:41.926324 env[1190]: time="2025-05-17T00:42:41.926113339Z" level=info msg="StartContainer for \"d1e03463516cf24df4e531ee146e38dc4ef010936739f7db616a8bd3b170a363\""
May 17 00:42:41.936754 env[1190]: time="2025-05-17T00:42:41.936610423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:42:41.936999 env[1190]: time="2025-05-17T00:42:41.936772715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:42:41.936999 env[1190]: time="2025-05-17T00:42:41.936823208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:42:41.937203 env[1190]: time="2025-05-17T00:42:41.937062984Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46977a24488eee8c6513961675afd6e38b945b3321f5eeab942b3179734a0885 pid=3144 runtime=io.containerd.runc.v2
May 17 00:42:41.973783 systemd[1]: Started cri-containerd-d1e03463516cf24df4e531ee146e38dc4ef010936739f7db616a8bd3b170a363.scope.
May 17 00:42:41.987077 systemd[1]: Started cri-containerd-46977a24488eee8c6513961675afd6e38b945b3321f5eeab942b3179734a0885.scope.
May 17 00:42:42.060452 env[1190]: time="2025-05-17T00:42:42.060394918Z" level=info msg="StartContainer for \"d1e03463516cf24df4e531ee146e38dc4ef010936739f7db616a8bd3b170a363\" returns successfully"
May 17 00:42:42.080018 env[1190]: time="2025-05-17T00:42:42.079952692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5h7p6,Uid:c6c0acbd-b1a8-4b93-b86d-62c4ce37e26e,Namespace:kube-system,Attempt:0,} returns sandbox id \"46977a24488eee8c6513961675afd6e38b945b3321f5eeab942b3179734a0885\""
May 17 00:42:42.082101 kubelet[1897]: E0517 00:42:42.082057 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:42.088554 env[1190]: time="2025-05-17T00:42:42.088388739Z" level=info msg="CreateContainer within sandbox \"46977a24488eee8c6513961675afd6e38b945b3321f5eeab942b3179734a0885\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:42:42.117920 env[1190]: time="2025-05-17T00:42:42.117817947Z" level=info msg="CreateContainer within sandbox \"46977a24488eee8c6513961675afd6e38b945b3321f5eeab942b3179734a0885\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a13b45ef84eef3ae596c125e9240f3edb2021f7a07521f9ddaa1885dfedfc03\""
May 17 00:42:42.120925 env[1190]: time="2025-05-17T00:42:42.120826015Z" level=info msg="StartContainer for \"5a13b45ef84eef3ae596c125e9240f3edb2021f7a07521f9ddaa1885dfedfc03\""
May 17 00:42:42.150220 systemd[1]: Started cri-containerd-5a13b45ef84eef3ae596c125e9240f3edb2021f7a07521f9ddaa1885dfedfc03.scope.
May 17 00:42:42.205546 env[1190]: time="2025-05-17T00:42:42.205425369Z" level=info msg="StartContainer for \"5a13b45ef84eef3ae596c125e9240f3edb2021f7a07521f9ddaa1885dfedfc03\" returns successfully"
May 17 00:42:42.779922 kubelet[1897]: E0517 00:42:42.779846 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:42.784385 kubelet[1897]: E0517 00:42:42.784340 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:42.802633 kubelet[1897]: I0517 00:42:42.802555 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5h7p6" podStartSLOduration=26.802482111 podStartE2EDuration="26.802482111s" podCreationTimestamp="2025-05-17 00:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:42.800849614 +0000 UTC m=+31.638269657" watchObservedRunningTime="2025-05-17 00:42:42.802482111 +0000 UTC m=+31.639902163"
May 17 00:42:43.787349 kubelet[1897]: E0517 00:42:43.787297 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:43.788674 kubelet[1897]: E0517 00:42:43.788637 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:44.790012 kubelet[1897]: E0517 00:42:44.789971 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:44.790573 kubelet[1897]: E0517 00:42:44.790153 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:42:46.862408 systemd[1]: Started sshd@5-64.23.137.34:22-218.92.0.133:26065.service.
May 17 00:42:48.045777 sshd[3259]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.133 user=root
May 17 00:42:50.663235 sshd[3259]: Failed password for root from 218.92.0.133 port 26065 ssh2
May 17 00:42:52.726646 systemd[1]: Started sshd@6-64.23.137.34:22-147.75.109.163:35484.service.
May 17 00:42:52.780332 sshd[3265]: Accepted publickey for core from 147.75.109.163 port 35484 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:42:52.784838 sshd[3265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:52.797722 systemd[1]: Started session-6.scope.
May 17 00:42:52.797723 systemd-logind[1179]: New session 6 of user core.
May 17 00:42:53.083128 sshd[3265]: pam_unix(sshd:session): session closed for user core
May 17 00:42:53.089586 systemd-logind[1179]: Session 6 logged out. Waiting for processes to exit.
May 17 00:42:53.089969 systemd[1]: sshd@6-64.23.137.34:22-147.75.109.163:35484.service: Deactivated successfully.
May 17 00:42:53.091082 systemd[1]: session-6.scope: Deactivated successfully.
May 17 00:42:53.093664 systemd-logind[1179]: Removed session 6.
May 17 00:42:53.519194 sshd[3259]: Failed password for root from 218.92.0.133 port 26065 ssh2
May 17 00:42:55.773065 sshd[3259]: Failed password for root from 218.92.0.133 port 26065 ssh2
May 17 00:42:57.577907 sshd[3259]: Received disconnect from 218.92.0.133 port 26065:11: [preauth]
May 17 00:42:57.577907 sshd[3259]: Disconnected from authenticating user root 218.92.0.133 port 26065 [preauth]
May 17 00:42:57.578568 sshd[3259]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.133 user=root
May 17 00:42:57.580837 systemd[1]: sshd@5-64.23.137.34:22-218.92.0.133:26065.service: Deactivated successfully.
May 17 00:42:58.092384 systemd[1]: Started sshd@7-64.23.137.34:22-147.75.109.163:59896.service.
May 17 00:42:58.148272 sshd[3279]: Accepted publickey for core from 147.75.109.163 port 59896 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:42:58.151842 sshd[3279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:42:58.161280 systemd[1]: Started session-7.scope.
May 17 00:42:58.162301 systemd-logind[1179]: New session 7 of user core.
May 17 00:42:58.324147 sshd[3279]: pam_unix(sshd:session): session closed for user core
May 17 00:42:58.328095 systemd[1]: sshd@7-64.23.137.34:22-147.75.109.163:59896.service: Deactivated successfully.
May 17 00:42:58.329059 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:42:58.331684 systemd-logind[1179]: Session 7 logged out. Waiting for processes to exit.
May 17 00:42:58.333647 systemd-logind[1179]: Removed session 7.
May 17 00:43:03.334911 systemd[1]: Started sshd@8-64.23.137.34:22-147.75.109.163:59908.service.
May 17 00:43:03.383412 sshd[3292]: Accepted publickey for core from 147.75.109.163 port 59908 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:03.386731 sshd[3292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:03.394287 systemd[1]: Started session-8.scope.
May 17 00:43:03.395183 systemd-logind[1179]: New session 8 of user core.
May 17 00:43:03.560138 sshd[3292]: pam_unix(sshd:session): session closed for user core
May 17 00:43:03.564771 systemd[1]: sshd@8-64.23.137.34:22-147.75.109.163:59908.service: Deactivated successfully.
May 17 00:43:03.565794 systemd[1]: session-8.scope: Deactivated successfully.
May 17 00:43:03.567605 systemd-logind[1179]: Session 8 logged out. Waiting for processes to exit.
May 17 00:43:03.569466 systemd-logind[1179]: Removed session 8.
May 17 00:43:08.570981 systemd[1]: Started sshd@9-64.23.137.34:22-147.75.109.163:47706.service.
May 17 00:43:08.629956 sshd[3305]: Accepted publickey for core from 147.75.109.163 port 47706 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:08.632893 sshd[3305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:08.638398 systemd-logind[1179]: New session 9 of user core.
May 17 00:43:08.639358 systemd[1]: Started session-9.scope.
May 17 00:43:08.795582 sshd[3305]: pam_unix(sshd:session): session closed for user core
May 17 00:43:08.799474 systemd-logind[1179]: Session 9 logged out. Waiting for processes to exit.
May 17 00:43:08.799879 systemd[1]: sshd@9-64.23.137.34:22-147.75.109.163:47706.service: Deactivated successfully.
May 17 00:43:08.800768 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:43:08.802336 systemd-logind[1179]: Removed session 9.
May 17 00:43:13.803872 systemd[1]: Started sshd@10-64.23.137.34:22-147.75.109.163:47708.service.
May 17 00:43:13.859436 sshd[3320]: Accepted publickey for core from 147.75.109.163 port 47708 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:13.861728 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:13.868181 systemd-logind[1179]: New session 10 of user core.
May 17 00:43:13.868706 systemd[1]: Started session-10.scope.
May 17 00:43:14.025645 sshd[3320]: pam_unix(sshd:session): session closed for user core
May 17 00:43:14.032784 systemd[1]: sshd@10-64.23.137.34:22-147.75.109.163:47708.service: Deactivated successfully.
May 17 00:43:14.034315 systemd[1]: session-10.scope: Deactivated successfully.
May 17 00:43:14.035557 systemd-logind[1179]: Session 10 logged out. Waiting for processes to exit.
May 17 00:43:14.041135 systemd[1]: Started sshd@11-64.23.137.34:22-147.75.109.163:47710.service.
May 17 00:43:14.045219 systemd-logind[1179]: Removed session 10.
May 17 00:43:14.093077 sshd[3332]: Accepted publickey for core from 147.75.109.163 port 47710 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:14.094623 sshd[3332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:14.101206 systemd-logind[1179]: New session 11 of user core.
May 17 00:43:14.101809 systemd[1]: Started session-11.scope.
May 17 00:43:14.369195 sshd[3332]: pam_unix(sshd:session): session closed for user core
May 17 00:43:14.379794 systemd[1]: Started sshd@12-64.23.137.34:22-147.75.109.163:47720.service.
May 17 00:43:14.384060 systemd[1]: sshd@11-64.23.137.34:22-147.75.109.163:47710.service: Deactivated successfully.
May 17 00:43:14.385675 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:43:14.387095 systemd-logind[1179]: Session 11 logged out. Waiting for processes to exit.
May 17 00:43:14.390099 systemd-logind[1179]: Removed session 11.
May 17 00:43:14.468390 sshd[3341]: Accepted publickey for core from 147.75.109.163 port 47720 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:14.472870 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:14.481235 systemd[1]: Started session-12.scope.
May 17 00:43:14.482053 systemd-logind[1179]: New session 12 of user core.
May 17 00:43:14.695181 sshd[3341]: pam_unix(sshd:session): session closed for user core
May 17 00:43:14.699820 systemd[1]: sshd@12-64.23.137.34:22-147.75.109.163:47720.service: Deactivated successfully.
May 17 00:43:14.700824 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:43:14.702125 systemd-logind[1179]: Session 12 logged out. Waiting for processes to exit.
May 17 00:43:14.703636 systemd-logind[1179]: Removed session 12.
May 17 00:43:16.578385 kubelet[1897]: E0517 00:43:16.578330 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:43:19.706170 systemd[1]: Started sshd@13-64.23.137.34:22-147.75.109.163:47326.service.
May 17 00:43:19.755322 sshd[3358]: Accepted publickey for core from 147.75.109.163 port 47326 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:19.758011 sshd[3358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:19.765391 systemd-logind[1179]: New session 13 of user core.
May 17 00:43:19.766275 systemd[1]: Started session-13.scope.
May 17 00:43:19.939184 sshd[3358]: pam_unix(sshd:session): session closed for user core
May 17 00:43:19.944644 systemd[1]: sshd@13-64.23.137.34:22-147.75.109.163:47326.service: Deactivated successfully.
May 17 00:43:19.945910 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:43:19.946766 systemd-logind[1179]: Session 13 logged out. Waiting for processes to exit.
May 17 00:43:19.948256 systemd-logind[1179]: Removed session 13.
May 17 00:43:20.577427 kubelet[1897]: E0517 00:43:20.577366 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:43:24.948683 systemd[1]: Started sshd@14-64.23.137.34:22-147.75.109.163:47330.service.
May 17 00:43:25.003897 sshd[3370]: Accepted publickey for core from 147.75.109.163 port 47330 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:25.007430 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:25.015781 systemd-logind[1179]: New session 14 of user core.
May 17 00:43:25.017253 systemd[1]: Started session-14.scope.
May 17 00:43:25.191824 sshd[3370]: pam_unix(sshd:session): session closed for user core
May 17 00:43:25.198128 systemd-logind[1179]: Session 14 logged out. Waiting for processes to exit.
May 17 00:43:25.198895 systemd[1]: sshd@14-64.23.137.34:22-147.75.109.163:47330.service: Deactivated successfully.
May 17 00:43:25.199849 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:43:25.201144 systemd-logind[1179]: Removed session 14.
May 17 00:43:30.202176 systemd[1]: Started sshd@15-64.23.137.34:22-147.75.109.163:50962.service.
May 17 00:43:30.262824 sshd[3382]: Accepted publickey for core from 147.75.109.163 port 50962 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:30.266778 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:30.277076 systemd[1]: Started session-15.scope.
May 17 00:43:30.278119 systemd-logind[1179]: New session 15 of user core.
May 17 00:43:30.457457 sshd[3382]: pam_unix(sshd:session): session closed for user core
May 17 00:43:30.466908 systemd[1]: Started sshd@16-64.23.137.34:22-147.75.109.163:50972.service.
May 17 00:43:30.469311 systemd[1]: sshd@15-64.23.137.34:22-147.75.109.163:50962.service: Deactivated successfully.
May 17 00:43:30.470868 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:43:30.476129 systemd-logind[1179]: Session 15 logged out. Waiting for processes to exit.
May 17 00:43:30.477947 systemd-logind[1179]: Removed session 15.
May 17 00:43:30.523575 sshd[3393]: Accepted publickey for core from 147.75.109.163 port 50972 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:30.526280 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:30.534707 systemd[1]: Started session-16.scope.
May 17 00:43:30.535375 systemd-logind[1179]: New session 16 of user core.
May 17 00:43:30.916220 sshd[3393]: pam_unix(sshd:session): session closed for user core
May 17 00:43:30.923935 systemd[1]: Started sshd@17-64.23.137.34:22-147.75.109.163:50988.service.
May 17 00:43:30.932639 systemd[1]: sshd@16-64.23.137.34:22-147.75.109.163:50972.service: Deactivated successfully.
May 17 00:43:30.934561 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:43:30.935741 systemd-logind[1179]: Session 16 logged out. Waiting for processes to exit.
May 17 00:43:30.939406 systemd-logind[1179]: Removed session 16.
May 17 00:43:30.988891 sshd[3403]: Accepted publickey for core from 147.75.109.163 port 50988 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:30.990977 sshd[3403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:30.998473 systemd[1]: Started session-17.scope.
May 17 00:43:30.999794 systemd-logind[1179]: New session 17 of user core.
May 17 00:43:32.115189 sshd[3403]: pam_unix(sshd:session): session closed for user core
May 17 00:43:32.128188 systemd[1]: Started sshd@18-64.23.137.34:22-147.75.109.163:51000.service.
May 17 00:43:32.130595 systemd[1]: sshd@17-64.23.137.34:22-147.75.109.163:50988.service: Deactivated successfully.
May 17 00:43:32.133330 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:43:32.136565 systemd-logind[1179]: Session 17 logged out. Waiting for processes to exit.
May 17 00:43:32.138867 systemd-logind[1179]: Removed session 17.
May 17 00:43:32.189540 sshd[3418]: Accepted publickey for core from 147.75.109.163 port 51000 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:32.191878 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:32.198857 systemd-logind[1179]: New session 18 of user core.
May 17 00:43:32.200183 systemd[1]: Started session-18.scope.
May 17 00:43:32.600835 sshd[3418]: pam_unix(sshd:session): session closed for user core
May 17 00:43:32.612465 systemd[1]: Started sshd@19-64.23.137.34:22-147.75.109.163:51016.service.
May 17 00:43:32.613456 systemd[1]: sshd@18-64.23.137.34:22-147.75.109.163:51000.service: Deactivated successfully.
May 17 00:43:32.615258 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:43:32.617708 systemd-logind[1179]: Session 18 logged out. Waiting for processes to exit.
May 17 00:43:32.620860 systemd-logind[1179]: Removed session 18.
May 17 00:43:32.677753 sshd[3429]: Accepted publickey for core from 147.75.109.163 port 51016 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:32.680124 sshd[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:32.686856 systemd[1]: Started session-19.scope.
May 17 00:43:32.687478 systemd-logind[1179]: New session 19 of user core.
May 17 00:43:32.877454 sshd[3429]: pam_unix(sshd:session): session closed for user core
May 17 00:43:32.882568 systemd-logind[1179]: Session 19 logged out. Waiting for processes to exit.
May 17 00:43:32.883607 systemd[1]: sshd@19-64.23.137.34:22-147.75.109.163:51016.service: Deactivated successfully.
May 17 00:43:32.884929 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:43:32.886053 systemd-logind[1179]: Removed session 19.
May 17 00:43:33.578328 kubelet[1897]: E0517 00:43:33.578261 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:43:37.888842 systemd[1]: Started sshd@20-64.23.137.34:22-147.75.109.163:51032.service.
May 17 00:43:37.942564 sshd[3442]: Accepted publickey for core from 147.75.109.163 port 51032 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:37.944796 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:37.952610 systemd-logind[1179]: New session 20 of user core.
May 17 00:43:37.953016 systemd[1]: Started session-20.scope.
May 17 00:43:38.138715 sshd[3442]: pam_unix(sshd:session): session closed for user core
May 17 00:43:38.144068 systemd-logind[1179]: Session 20 logged out. Waiting for processes to exit.
May 17 00:43:38.145151 systemd[1]: sshd@20-64.23.137.34:22-147.75.109.163:51032.service: Deactivated successfully.
May 17 00:43:38.146409 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:43:38.149585 systemd-logind[1179]: Removed session 20.
May 17 00:43:41.580098 kubelet[1897]: E0517 00:43:41.580047 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:43:41.580918 kubelet[1897]: E0517 00:43:41.580636 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:43:43.147402 systemd[1]: Started sshd@21-64.23.137.34:22-147.75.109.163:35030.service.
May 17 00:43:43.193896 sshd[3457]: Accepted publickey for core from 147.75.109.163 port 35030 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:43.194853 sshd[3457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:43.201861 systemd[1]: Started session-21.scope.
May 17 00:43:43.202526 systemd-logind[1179]: New session 21 of user core.
May 17 00:43:43.371479 sshd[3457]: pam_unix(sshd:session): session closed for user core
May 17 00:43:43.376549 systemd[1]: sshd@21-64.23.137.34:22-147.75.109.163:35030.service: Deactivated successfully.
May 17 00:43:43.377775 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:43:43.379525 systemd-logind[1179]: Session 21 logged out. Waiting for processes to exit.
May 17 00:43:43.381362 systemd-logind[1179]: Removed session 21.
May 17 00:43:48.380980 systemd[1]: Started sshd@22-64.23.137.34:22-147.75.109.163:52970.service.
May 17 00:43:48.434865 sshd[3469]: Accepted publickey for core from 147.75.109.163 port 52970 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:48.437784 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:48.444589 systemd-logind[1179]: New session 22 of user core.
May 17 00:43:48.446335 systemd[1]: Started session-22.scope.
May 17 00:43:48.601325 sshd[3469]: pam_unix(sshd:session): session closed for user core
May 17 00:43:48.605793 systemd[1]: sshd@22-64.23.137.34:22-147.75.109.163:52970.service: Deactivated successfully.
May 17 00:43:48.606804 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:43:48.607633 systemd-logind[1179]: Session 22 logged out. Waiting for processes to exit.
May 17 00:43:48.608757 systemd-logind[1179]: Removed session 22.
May 17 00:43:53.609559 systemd[1]: Started sshd@23-64.23.137.34:22-147.75.109.163:52982.service.
May 17 00:43:53.660018 sshd[3483]: Accepted publickey for core from 147.75.109.163 port 52982 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:53.665031 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:53.673852 systemd-logind[1179]: New session 23 of user core.
May 17 00:43:53.674859 systemd[1]: Started session-23.scope.
May 17 00:43:53.851162 sshd[3483]: pam_unix(sshd:session): session closed for user core
May 17 00:43:53.858034 systemd[1]: sshd@23-64.23.137.34:22-147.75.109.163:52982.service: Deactivated successfully.
May 17 00:43:53.859875 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:43:53.861112 systemd-logind[1179]: Session 23 logged out. Waiting for processes to exit.
May 17 00:43:53.863280 systemd-logind[1179]: Removed session 23.
May 17 00:43:56.578268 kubelet[1897]: E0517 00:43:56.578207 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:43:58.860350 systemd[1]: Started sshd@24-64.23.137.34:22-147.75.109.163:37482.service.
May 17 00:43:58.908437 sshd[3495]: Accepted publickey for core from 147.75.109.163 port 37482 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:58.912020 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:58.919754 systemd-logind[1179]: New session 24 of user core.
May 17 00:43:58.921588 systemd[1]: Started session-24.scope.
May 17 00:43:59.081495 sshd[3495]: pam_unix(sshd:session): session closed for user core
May 17 00:43:59.090750 systemd[1]: Started sshd@25-64.23.137.34:22-147.75.109.163:37498.service.
May 17 00:43:59.092377 systemd[1]: sshd@24-64.23.137.34:22-147.75.109.163:37482.service: Deactivated successfully.
May 17 00:43:59.094041 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:43:59.095736 systemd-logind[1179]: Session 24 logged out. Waiting for processes to exit.
May 17 00:43:59.096879 systemd-logind[1179]: Removed session 24.
May 17 00:43:59.145575 sshd[3506]: Accepted publickey for core from 147.75.109.163 port 37498 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:43:59.148795 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:43:59.156054 systemd-logind[1179]: New session 25 of user core.
May 17 00:43:59.157833 systemd[1]: Started session-25.scope.
May 17 00:44:01.558908 kubelet[1897]: I0517 00:44:01.558781 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h95nk" podStartSLOduration=105.558704406 podStartE2EDuration="1m45.558704406s" podCreationTimestamp="2025-05-17 00:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:42:42.843836064 +0000 UTC m=+31.681256117" watchObservedRunningTime="2025-05-17 00:44:01.558704406 +0000 UTC m=+110.396124458"
May 17 00:44:01.624484 systemd[1]: run-containerd-runc-k8s.io-0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6-runc.3OeoKk.mount: Deactivated successfully.
May 17 00:44:01.682101 env[1190]: time="2025-05-17T00:44:01.681949043Z" level=info msg="StopContainer for \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\" with timeout 30 (s)"
May 17 00:44:01.692570 env[1190]: time="2025-05-17T00:44:01.691588361Z" level=info msg="Stop container \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\" with signal terminated"
May 17 00:44:01.715616 env[1190]: time="2025-05-17T00:44:01.713037272Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:44:01.726374 env[1190]: time="2025-05-17T00:44:01.726268079Z" level=info msg="StopContainer for \"0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6\" with timeout 2 (s)"
May 17 00:44:01.728305 env[1190]: time="2025-05-17T00:44:01.728218403Z" level=info msg="Stop container \"0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6\" with signal terminated"
May 17 00:44:01.730139 systemd[1]: cri-containerd-51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630.scope: Deactivated successfully.
May 17 00:44:01.749085 systemd-networkd[1010]: lxc_health: Link DOWN
May 17 00:44:01.749102 systemd-networkd[1010]: lxc_health: Lost carrier
May 17 00:44:01.788744 systemd[1]: cri-containerd-0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6.scope: Deactivated successfully.
May 17 00:44:01.789175 systemd[1]: cri-containerd-0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6.scope: Consumed 9.129s CPU time.
May 17 00:44:01.813788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630-rootfs.mount: Deactivated successfully.
May 17 00:44:01.825094 env[1190]: time="2025-05-17T00:44:01.824939616Z" level=info msg="shim disconnected" id=51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630
May 17 00:44:01.825094 env[1190]: time="2025-05-17T00:44:01.825020687Z" level=warning msg="cleaning up after shim disconnected" id=51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630 namespace=k8s.io
May 17 00:44:01.825094 env[1190]: time="2025-05-17T00:44:01.825035853Z" level=info msg="cleaning up dead shim"
May 17 00:44:01.847789 env[1190]: time="2025-05-17T00:44:01.847558336Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3567 runtime=io.containerd.runc.v2\n"
May 17 00:44:01.851855 env[1190]: time="2025-05-17T00:44:01.851784103Z" level=info msg="StopContainer for \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\" returns successfully"
May 17 00:44:01.853212 env[1190]: time="2025-05-17T00:44:01.853144356Z" level=info msg="StopPodSandbox for \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\""
May 17 00:44:01.853640 env[1190]: time="2025-05-17T00:44:01.853581690Z" level=info msg="Container to stop \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:44:01.858692 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140-shm.mount: Deactivated successfully.
May 17 00:44:01.878467 env[1190]: time="2025-05-17T00:44:01.878386419Z" level=info msg="shim disconnected" id=0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6
May 17 00:44:01.879098 env[1190]: time="2025-05-17T00:44:01.879047259Z" level=warning msg="cleaning up after shim disconnected" id=0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6 namespace=k8s.io
May 17 00:44:01.879332 env[1190]: time="2025-05-17T00:44:01.879301917Z" level=info msg="cleaning up dead shim"
May 17 00:44:01.891378 systemd[1]: cri-containerd-ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140.scope: Deactivated successfully.
May 17 00:44:01.911589 env[1190]: time="2025-05-17T00:44:01.911417538Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3594 runtime=io.containerd.runc.v2\n"
May 17 00:44:01.916225 env[1190]: time="2025-05-17T00:44:01.916137485Z" level=info msg="StopContainer for \"0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6\" returns successfully"
May 17 00:44:01.917651 env[1190]: time="2025-05-17T00:44:01.917562162Z" level=info msg="StopPodSandbox for \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\""
May 17 00:44:01.917894 env[1190]: time="2025-05-17T00:44:01.917743170Z" level=info msg="Container to stop \"0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:44:01.917894 env[1190]: time="2025-05-17T00:44:01.917780385Z" level=info msg="Container to stop \"9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:44:01.917894 env[1190]: time="2025-05-17T00:44:01.917798902Z" level=info msg="Container to stop \"84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:44:01.917894 env[1190]: time="2025-05-17T00:44:01.917817205Z" level=info msg="Container to stop \"9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:44:01.917894 env[1190]: time="2025-05-17T00:44:01.917836418Z" level=info msg="Container to stop \"f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:44:01.936064 systemd[1]: cri-containerd-1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79.scope: Deactivated successfully.
May 17 00:44:01.953745 env[1190]: time="2025-05-17T00:44:01.953658992Z" level=info msg="shim disconnected" id=ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140
May 17 00:44:01.954710 env[1190]: time="2025-05-17T00:44:01.954645831Z" level=warning msg="cleaning up after shim disconnected" id=ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140 namespace=k8s.io
May 17 00:44:01.954995 env[1190]: time="2025-05-17T00:44:01.954958448Z" level=info msg="cleaning up dead shim"
May 17 00:44:01.974603 env[1190]: time="2025-05-17T00:44:01.974445594Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3628 runtime=io.containerd.runc.v2\n"
May 17 00:44:01.975309 env[1190]: time="2025-05-17T00:44:01.975242701Z" level=info msg="TearDown network for sandbox \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\" successfully"
May 17 00:44:01.975309 env[1190]: time="2025-05-17T00:44:01.975302057Z" level=info msg="StopPodSandbox for \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\" returns successfully"
May 17 00:44:02.014769 env[1190]: time="2025-05-17T00:44:02.014546052Z" level=info msg="shim disconnected" id=1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79
May 17 00:44:02.014769 env[1190]: time="2025-05-17T00:44:02.014764849Z" level=warning msg="cleaning up after shim disconnected" id=1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79 namespace=k8s.io
May 17 00:44:02.015119 env[1190]: time="2025-05-17T00:44:02.014789244Z" level=info msg="cleaning up dead shim"
May 17 00:44:02.052188 env[1190]: time="2025-05-17T00:44:02.052102824Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3654 runtime=io.containerd.runc.v2\n"
May 17 00:44:02.053371 kubelet[1897]: I0517 00:44:02.053300 1897 scope.go:117] "RemoveContainer" containerID="51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630"
May 17 00:44:02.054400 env[1190]: time="2025-05-17T00:44:02.054248324Z" level=info msg="TearDown network for sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" successfully"
May 17 00:44:02.054788 env[1190]: time="2025-05-17T00:44:02.054723872Z" level=info msg="StopPodSandbox for \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" returns successfully"
May 17 00:44:02.069983 env[1190]: time="2025-05-17T00:44:02.067287878Z" level=info msg="RemoveContainer for \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\""
May 17 00:44:02.089244 env[1190]: time="2025-05-17T00:44:02.089155674Z" level=info msg="RemoveContainer for \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\" returns successfully"
May 17 00:44:02.097560 kubelet[1897]: I0517 00:44:02.093572 1897 scope.go:117] "RemoveContainer" containerID="51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630"
May 17 00:44:02.102374 env[1190]: time="2025-05-17T00:44:02.102181407Z" level=error msg="ContainerStatus for \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\": not found"
May 17 00:44:02.109404 kubelet[1897]: E0517 00:44:02.109295 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\": not found" containerID="51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630"
May 17 00:44:02.112621 kubelet[1897]: I0517 00:44:02.112336 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630"} err="failed to get container status \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\": rpc error: code = NotFound desc = an error occurred when try to find container \"51ede79b09a22c547dae8bb59327a4f6386e735081b00de6503d95b1bfd40630\": not found"
May 17 00:44:02.139935 kubelet[1897]: I0517 00:44:02.139837 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlghb\" (UniqueName: \"kubernetes.io/projected/0d1300f0-3cfc-4b28-9463-e44841136d21-kube-api-access-rlghb\") pod \"0d1300f0-3cfc-4b28-9463-e44841136d21\" (UID: \"0d1300f0-3cfc-4b28-9463-e44841136d21\") "
May 17 00:44:02.141247 kubelet[1897]: I0517 00:44:02.141192 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d1300f0-3cfc-4b28-9463-e44841136d21-cilium-config-path\") pod \"0d1300f0-3cfc-4b28-9463-e44841136d21\" (UID: \"0d1300f0-3cfc-4b28-9463-e44841136d21\") "
May 17 00:44:02.161764 kubelet[1897]: I0517 00:44:02.153637 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d1300f0-3cfc-4b28-9463-e44841136d21-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d1300f0-3cfc-4b28-9463-e44841136d21" (UID: "0d1300f0-3cfc-4b28-9463-e44841136d21"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:44:02.189842 kubelet[1897]: I0517 00:44:02.189705 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d1300f0-3cfc-4b28-9463-e44841136d21-kube-api-access-rlghb" (OuterVolumeSpecName: "kube-api-access-rlghb") pod "0d1300f0-3cfc-4b28-9463-e44841136d21" (UID: "0d1300f0-3cfc-4b28-9463-e44841136d21"). InnerVolumeSpecName "kube-api-access-rlghb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:44:02.243484 kubelet[1897]: I0517 00:44:02.243413 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/005ae1be-c53d-4f26-8325-4828c561090f-clustermesh-secrets\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.243741 kubelet[1897]: I0517 00:44:02.243549 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-xtables-lock\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.243847 kubelet[1897]: I0517 00:44:02.243764 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cilium-cgroup\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.243938 kubelet[1897]: I0517 00:44:02.243827 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-hostproc\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.243938 kubelet[1897]: I0517 00:44:02.243907 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-host-proc-sys-kernel\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244050 kubelet[1897]: I0517 00:44:02.243937 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cni-path\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244050 kubelet[1897]: I0517 00:44:02.243985 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/005ae1be-c53d-4f26-8325-4828c561090f-cilium-config-path\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244050 kubelet[1897]: I0517 00:44:02.244013 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-etc-cni-netd\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244050 kubelet[1897]: I0517 00:44:02.244039 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/005ae1be-c53d-4f26-8325-4828c561090f-hubble-tls\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244262 kubelet[1897]: I0517 00:44:02.244114 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-bpf-maps\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244262 kubelet[1897]: I0517 00:44:02.244156 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-host-proc-sys-net\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244262 kubelet[1897]: I0517 00:44:02.244182 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-lib-modules\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244452 kubelet[1897]: I0517 00:44:02.244340 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94ht7\" (UniqueName: \"kubernetes.io/projected/005ae1be-c53d-4f26-8325-4828c561090f-kube-api-access-94ht7\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244452 kubelet[1897]: I0517 00:44:02.244384 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cilium-run\") pod \"005ae1be-c53d-4f26-8325-4828c561090f\" (UID: \"005ae1be-c53d-4f26-8325-4828c561090f\") "
May 17 00:44:02.244587 kubelet[1897]: I0517 00:44:02.244491 1897 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rlghb\" (UniqueName: \"kubernetes.io/projected/0d1300f0-3cfc-4b28-9463-e44841136d21-kube-api-access-rlghb\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.244587 kubelet[1897]: I0517 00:44:02.244533 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d1300f0-3cfc-4b28-9463-e44841136d21-cilium-config-path\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.244799 kubelet[1897]: I0517 00:44:02.244603 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.244799 kubelet[1897]: I0517 00:44:02.244687 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.244799 kubelet[1897]: I0517 00:44:02.244714 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.244799 kubelet[1897]: I0517 00:44:02.244736 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-hostproc" (OuterVolumeSpecName: "hostproc") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.244799 kubelet[1897]: I0517 00:44:02.244772 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.245090 kubelet[1897]: I0517 00:44:02.244796 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cni-path" (OuterVolumeSpecName: "cni-path") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.247749 kubelet[1897]: I0517 00:44:02.247675 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.248108 kubelet[1897]: I0517 00:44:02.248062 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.248208 kubelet[1897]: I0517 00:44:02.247855 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.248208 kubelet[1897]: I0517 00:44:02.248150 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:02.251413 kubelet[1897]: I0517 00:44:02.251332 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/005ae1be-c53d-4f26-8325-4828c561090f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:44:02.254951 kubelet[1897]: I0517 00:44:02.254825 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/005ae1be-c53d-4f26-8325-4828c561090f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:44:02.258117 kubelet[1897]: I0517 00:44:02.258017 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005ae1be-c53d-4f26-8325-4828c561090f-kube-api-access-94ht7" (OuterVolumeSpecName: "kube-api-access-94ht7") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "kube-api-access-94ht7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:44:02.261893 kubelet[1897]: I0517 00:44:02.261802 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005ae1be-c53d-4f26-8325-4828c561090f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "005ae1be-c53d-4f26-8325-4828c561090f" (UID: "005ae1be-c53d-4f26-8325-4828c561090f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:44:02.342133 systemd[1]: Removed slice kubepods-besteffort-pod0d1300f0_3cfc_4b28_9463_e44841136d21.slice.
May 17 00:44:02.348806 kubelet[1897]: I0517 00:44:02.348766 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cilium-cgroup\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.349045 kubelet[1897]: I0517 00:44:02.349024 1897 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-hostproc\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.349205 kubelet[1897]: I0517 00:44:02.349182 1897 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.349350 kubelet[1897]: I0517 00:44:02.349331 1897 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cni-path\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.349459 kubelet[1897]: I0517 00:44:02.349441 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/005ae1be-c53d-4f26-8325-4828c561090f-cilium-config-path\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.349691 kubelet[1897]: I0517 00:44:02.349665 1897 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-etc-cni-netd\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.349841 kubelet[1897]: I0517 00:44:02.349823 1897 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/005ae1be-c53d-4f26-8325-4828c561090f-hubble-tls\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.349948 kubelet[1897]: I0517 00:44:02.349931 1897 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-bpf-maps\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.350810 kubelet[1897]: I0517 00:44:02.350773 1897 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-host-proc-sys-net\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.351691 kubelet[1897]: I0517 00:44:02.351625 1897 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-lib-modules\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.355186 kubelet[1897]: I0517 00:44:02.354668 1897 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94ht7\" (UniqueName: \"kubernetes.io/projected/005ae1be-c53d-4f26-8325-4828c561090f-kube-api-access-94ht7\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.355186 kubelet[1897]: I0517 00:44:02.354898 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-cilium-run\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.355186 kubelet[1897]: I0517 00:44:02.354956 1897 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/005ae1be-c53d-4f26-8325-4828c561090f-clustermesh-secrets\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.355186 kubelet[1897]: I0517 00:44:02.354972 1897 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/005ae1be-c53d-4f26-8325-4828c561090f-xtables-lock\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\""
May 17 00:44:02.602387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6-rootfs.mount: Deactivated successfully.
May 17 00:44:02.603459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140-rootfs.mount: Deactivated successfully.
May 17 00:44:02.603615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79-rootfs.mount: Deactivated successfully.
May 17 00:44:02.603717 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79-shm.mount: Deactivated successfully.
May 17 00:44:02.603832 systemd[1]: var-lib-kubelet-pods-0d1300f0\x2d3cfc\x2d4b28\x2d9463\x2de44841136d21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drlghb.mount: Deactivated successfully.
May 17 00:44:02.603949 systemd[1]: var-lib-kubelet-pods-005ae1be\x2dc53d\x2d4f26\x2d8325\x2d4828c561090f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d94ht7.mount: Deactivated successfully.
May 17 00:44:02.604055 systemd[1]: var-lib-kubelet-pods-005ae1be\x2dc53d\x2d4f26\x2d8325\x2d4828c561090f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 00:44:02.604166 systemd[1]: var-lib-kubelet-pods-005ae1be\x2dc53d\x2d4f26\x2d8325\x2d4828c561090f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:44:03.081893 kubelet[1897]: I0517 00:44:03.081822 1897 scope.go:117] "RemoveContainer" containerID="0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6"
May 17 00:44:03.087407 systemd[1]: Removed slice kubepods-burstable-pod005ae1be_c53d_4f26_8325_4828c561090f.slice.
May 17 00:44:03.087620 systemd[1]: kubepods-burstable-pod005ae1be_c53d_4f26_8325_4828c561090f.slice: Consumed 9.307s CPU time.
May 17 00:44:03.102130 env[1190]: time="2025-05-17T00:44:03.102030292Z" level=info msg="RemoveContainer for \"0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6\""
May 17 00:44:03.107119 env[1190]: time="2025-05-17T00:44:03.105731909Z" level=info msg="RemoveContainer for \"0e9d6795e0aba282f8a26aec17112f948431049d49be3130077090090c9164a6\" returns successfully"
May 17 00:44:03.107401 kubelet[1897]: I0517 00:44:03.106143 1897 scope.go:117] "RemoveContainer" containerID="9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949"
May 17 00:44:03.108045 env[1190]: time="2025-05-17T00:44:03.107994881Z" level=info msg="RemoveContainer for \"9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949\""
May 17 00:44:03.115762 env[1190]: time="2025-05-17T00:44:03.115478312Z" level=info msg="RemoveContainer for \"9b0dacff79c6668f5c7ebf39e01063a40306407a55b9abac62699dc7f94de949\" returns successfully"
May 17 00:44:03.116411 kubelet[1897]: I0517 00:44:03.116358 1897 scope.go:117] "RemoveContainer" containerID="9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998"
May 17 00:44:03.121326 env[1190]: time="2025-05-17T00:44:03.120424703Z" level=info msg="RemoveContainer for \"9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998\""
May 17 00:44:03.125473 env[1190]: time="2025-05-17T00:44:03.125376460Z" level=info msg="RemoveContainer for \"9dcd6ea6eccc122d5825b718c60f88f8c11bd5046b3532ab158d2921fe0a6998\" returns successfully"
May 17 00:44:03.126226 kubelet[1897]: I0517 00:44:03.126176 1897 scope.go:117] "RemoveContainer" containerID="84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95"
May 17 00:44:03.132058 env[1190]: time="2025-05-17T00:44:03.131611338Z" level=info msg="RemoveContainer for \"84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95\""
May 17 00:44:03.137009 env[1190]: time="2025-05-17T00:44:03.136924491Z" level=info msg="RemoveContainer for \"84853fd3a18f184411a25725a671acd75e61cef9b3407956151c16dd2f573c95\" returns successfully"
May 17 00:44:03.139163 kubelet[1897]: I0517 00:44:03.139106 1897 scope.go:117] "RemoveContainer" containerID="f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4"
May 17 00:44:03.143216 env[1190]: time="2025-05-17T00:44:03.143146749Z" level=info msg="RemoveContainer for \"f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4\""
May 17 00:44:03.149059 env[1190]: time="2025-05-17T00:44:03.148993570Z" level=info msg="RemoveContainer for \"f6890e9f987dd394d978de83cf7213ed7669a3b895f0653fa3083da0431a6da4\" returns successfully"
May 17 00:44:03.451078 sshd[3506]: pam_unix(sshd:session): session closed for user core
May 17 00:44:03.462079 systemd[1]: Started sshd@26-64.23.137.34:22-147.75.109.163:37514.service.
May 17 00:44:03.465933 systemd[1]: sshd@25-64.23.137.34:22-147.75.109.163:37498.service: Deactivated successfully.
May 17 00:44:03.467438 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:44:03.468130 systemd[1]: session-25.scope: Consumed 1.512s CPU time.
May 17 00:44:03.469071 systemd-logind[1179]: Session 25 logged out. Waiting for processes to exit.
May 17 00:44:03.471030 systemd-logind[1179]: Removed session 25.
May 17 00:44:03.533422 sshd[3673]: Accepted publickey for core from 147.75.109.163 port 37514 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:44:03.536335 sshd[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:03.551463 systemd[1]: Started session-26.scope.
May 17 00:44:03.552269 systemd-logind[1179]: New session 26 of user core.
May 17 00:44:03.580937 kubelet[1897]: I0517 00:44:03.580851 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005ae1be-c53d-4f26-8325-4828c561090f" path="/var/lib/kubelet/pods/005ae1be-c53d-4f26-8325-4828c561090f/volumes"
May 17 00:44:03.582173 kubelet[1897]: I0517 00:44:03.582100 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d1300f0-3cfc-4b28-9463-e44841136d21" path="/var/lib/kubelet/pods/0d1300f0-3cfc-4b28-9463-e44841136d21/volumes"
May 17 00:44:04.432965 sshd[3673]: pam_unix(sshd:session): session closed for user core
May 17 00:44:04.442011 systemd[1]: Started sshd@27-64.23.137.34:22-147.75.109.163:37524.service.
May 17 00:44:04.451459 systemd[1]: sshd@26-64.23.137.34:22-147.75.109.163:37514.service: Deactivated successfully.
May 17 00:44:04.454480 systemd[1]: session-26.scope: Deactivated successfully.
May 17 00:44:04.456440 systemd-logind[1179]: Session 26 logged out. Waiting for processes to exit.
May 17 00:44:04.467853 systemd-logind[1179]: Removed session 26.
May 17 00:44:04.508909 systemd[1]: Created slice kubepods-burstable-podab65d96f_3657_4ba8_9910_4595233cbd7f.slice.
May 17 00:44:04.535017 sshd[3683]: Accepted publickey for core from 147.75.109.163 port 37524 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:44:04.537763 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:04.549778 systemd-logind[1179]: New session 27 of user core.
May 17 00:44:04.551154 systemd[1]: Started session-27.scope.
May 17 00:44:04.678879 kubelet[1897]: I0517 00:44:04.678815 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cni-path\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.684748 kubelet[1897]: I0517 00:44:04.684569 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-ipsec-secrets\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.685036 kubelet[1897]: I0517 00:44:04.685007 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-hostproc\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.685183 kubelet[1897]: I0517 00:44:04.685159 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-cgroup\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.685270 kubelet[1897]: I0517 00:44:04.685251 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-etc-cni-netd\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.685358 kubelet[1897]: I0517 00:44:04.685344 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-host-proc-sys-kernel\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.685433 kubelet[1897]: I0517 00:44:04.685419 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab65d96f-3657-4ba8-9910-4595233cbd7f-hubble-tls\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.686101 kubelet[1897]: I0517 00:44:04.686062 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-bpf-maps\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.686309 kubelet[1897]: I0517 00:44:04.686264 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-host-proc-sys-net\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.686463 kubelet[1897]: I0517 00:44:04.686438 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-run\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.686610 kubelet[1897]: I0517 00:44:04.686594 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-lib-modules\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.686689 kubelet[1897]: I0517 00:44:04.686675 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-config-path\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.686772 kubelet[1897]: I0517 00:44:04.686758 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-xtables-lock\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.686856 kubelet[1897]: I0517 00:44:04.686843 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab65d96f-3657-4ba8-9910-4595233cbd7f-clustermesh-secrets\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.686964 kubelet[1897]: I0517 00:44:04.686948 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5tc8\" (UniqueName: \"kubernetes.io/projected/ab65d96f-3657-4ba8-9910-4595233cbd7f-kube-api-access-n5tc8\") pod \"cilium-tpvp8\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") " pod="kube-system/cilium-tpvp8"
May 17 00:44:04.877402 sshd[3683]: pam_unix(sshd:session): session closed for user core
May 17 00:44:04.892679 systemd[1]: Started sshd@28-64.23.137.34:22-147.75.109.163:37528.service.
May 17 00:44:04.893973 systemd[1]: sshd@27-64.23.137.34:22-147.75.109.163:37524.service: Deactivated successfully.
May 17 00:44:04.896374 systemd[1]: session-27.scope: Deactivated successfully.
May 17 00:44:04.898857 systemd-logind[1179]: Session 27 logged out. Waiting for processes to exit.
May 17 00:44:04.902269 systemd-logind[1179]: Removed session 27.
May 17 00:44:04.922544 kubelet[1897]: E0517 00:44:04.921903 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:04.924193 env[1190]: time="2025-05-17T00:44:04.924116881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpvp8,Uid:ab65d96f-3657-4ba8-9910-4595233cbd7f,Namespace:kube-system,Attempt:0,}"
May 17 00:44:04.963453 env[1190]: time="2025-05-17T00:44:04.962963409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:44:04.963453 env[1190]: time="2025-05-17T00:44:04.963162026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:44:04.963453 env[1190]: time="2025-05-17T00:44:04.963176243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:44:04.969615 env[1190]: time="2025-05-17T00:44:04.965334499Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0 pid=3709 runtime=io.containerd.runc.v2
May 17 00:44:04.986252 sshd[3699]: Accepted publickey for core from 147.75.109.163 port 37528 ssh2: RSA SHA256:EX9BYXX2dlhNNVyZl0biBFe+Nt3dwNpfc+iXRVj1d0w
May 17 00:44:04.989127 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:44:05.001084 systemd[1]: Started session-28.scope.
May 17 00:44:05.002217 systemd-logind[1179]: New session 28 of user core.
May 17 00:44:05.011917 systemd[1]: Started cri-containerd-e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0.scope.
May 17 00:44:05.067042 env[1190]: time="2025-05-17T00:44:05.066909618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tpvp8,Uid:ab65d96f-3657-4ba8-9910-4595233cbd7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\""
May 17 00:44:05.068878 kubelet[1897]: E0517 00:44:05.068541 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:05.087353 env[1190]: time="2025-05-17T00:44:05.087282798Z" level=info msg="CreateContainer within sandbox \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:44:05.107172 env[1190]: time="2025-05-17T00:44:05.107063455Z" level=info msg="CreateContainer within sandbox \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba\""
May 17 00:44:05.108378 env[1190]: time="2025-05-17T00:44:05.108307193Z" level=info msg="StartContainer for \"208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba\""
May 17 00:44:05.146882 systemd[1]: Started cri-containerd-208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba.scope.
May 17 00:44:05.171217 systemd[1]: cri-containerd-208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba.scope: Deactivated successfully.
May 17 00:44:05.191718 env[1190]: time="2025-05-17T00:44:05.191649042Z" level=info msg="shim disconnected" id=208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba
May 17 00:44:05.192168 env[1190]: time="2025-05-17T00:44:05.192117550Z" level=warning msg="cleaning up after shim disconnected" id=208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba namespace=k8s.io
May 17 00:44:05.192336 env[1190]: time="2025-05-17T00:44:05.192315511Z" level=info msg="cleaning up dead shim"
May 17 00:44:05.215758 env[1190]: time="2025-05-17T00:44:05.214631107Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3775 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:44:05Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 17 00:44:05.217864 env[1190]: time="2025-05-17T00:44:05.217710642Z" level=error msg="Failed to pipe stdout of container \"208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba\"" error="reading from a closed fifo"
May 17 00:44:05.220394 env[1190]: time="2025-05-17T00:44:05.220327570Z" level=error msg="Failed to pipe stderr of container \"208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba\"" error="reading from a closed fifo"
May 17 00:44:05.221215 env[1190]: time="2025-05-17T00:44:05.216996386Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed"
May 17 00:44:05.223368 env[1190]: time="2025-05-17T00:44:05.223290254Z" level=error msg="StartContainer for \"208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
May 17 00:44:05.227051 kubelet[1897]: E0517 00:44:05.223984 1897 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba"
May 17 00:44:05.227051 kubelet[1897]: E0517 00:44:05.224892 1897 kuberuntime_manager.go:1358] "Unhandled Error" err=<
May 17 00:44:05.227051 kubelet[1897]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
May 17 00:44:05.227051 kubelet[1897]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
May 17 00:44:05.227051 kubelet[1897]: rm /hostbin/cilium-mount
May 17 00:44:05.227450 kubelet[1897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5tc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-tpvp8_kube-system(ab65d96f-3657-4ba8-9910-4595233cbd7f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
May 17 00:44:05.227450 kubelet[1897]: > logger="UnhandledError"
May 17 00:44:05.227735 kubelet[1897]: E0517 00:44:05.226950 1897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-tpvp8" podUID="ab65d96f-3657-4ba8-9910-4595233cbd7f"
May 17 00:44:06.110942 env[1190]: time="2025-05-17T00:44:06.110837488Z" level=info msg="StopPodSandbox for \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\""
May 17 00:44:06.112002 env[1190]: time="2025-05-17T00:44:06.111898631Z" level=info msg="Container to stop \"208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:44:06.116951 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0-shm.mount: Deactivated successfully.
May 17 00:44:06.149706 systemd[1]: cri-containerd-e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0.scope: Deactivated successfully.
May 17 00:44:06.185252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0-rootfs.mount: Deactivated successfully.
May 17 00:44:06.191156 env[1190]: time="2025-05-17T00:44:06.191073444Z" level=info msg="shim disconnected" id=e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0
May 17 00:44:06.191156 env[1190]: time="2025-05-17T00:44:06.191150758Z" level=warning msg="cleaning up after shim disconnected" id=e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0 namespace=k8s.io
May 17 00:44:06.191156 env[1190]: time="2025-05-17T00:44:06.191166894Z" level=info msg="cleaning up dead shim"
May 17 00:44:06.207860 env[1190]: time="2025-05-17T00:44:06.207778826Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3809 runtime=io.containerd.runc.v2\n"
May 17 00:44:06.208446 env[1190]: time="2025-05-17T00:44:06.208390538Z" level=info msg="TearDown network for sandbox \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\" successfully"
May 17 00:44:06.208643 env[1190]: time="2025-05-17T00:44:06.208447267Z" level=info msg="StopPodSandbox for \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\" returns successfully"
May 17 00:44:06.333868 kubelet[1897]: I0517 00:44:06.333014 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab65d96f-3657-4ba8-9910-4595233cbd7f-clustermesh-secrets\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.334877 kubelet[1897]: I0517 00:44:06.334790 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cni-path\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335038 kubelet[1897]: I0517 00:44:06.334888 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-host-proc-sys-kernel\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335038 kubelet[1897]: I0517 00:44:06.334914 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-run\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335038 kubelet[1897]: I0517 00:44:06.334943 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-config-path\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335038 kubelet[1897]: I0517 00:44:06.334971 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-host-proc-sys-net\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335038 kubelet[1897]: I0517 00:44:06.335000 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-lib-modules\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335038 kubelet[1897]: I0517 00:44:06.335024 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-xtables-lock\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335350 kubelet[1897]: I0517 00:44:06.335057 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-cgroup\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335350 kubelet[1897]: I0517 00:44:06.335104 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5tc8\" (UniqueName: \"kubernetes.io/projected/ab65d96f-3657-4ba8-9910-4595233cbd7f-kube-api-access-n5tc8\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335350 kubelet[1897]: I0517 00:44:06.335136 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-ipsec-secrets\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335350 kubelet[1897]: I0517 00:44:06.335159 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab65d96f-3657-4ba8-9910-4595233cbd7f-hubble-tls\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335350 kubelet[1897]: I0517 00:44:06.335183 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-hostproc\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335350 kubelet[1897]: I0517 00:44:06.335204 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-etc-cni-netd\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335731 kubelet[1897]: I0517 00:44:06.335227 1897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-bpf-maps\") pod \"ab65d96f-3657-4ba8-9910-4595233cbd7f\" (UID: \"ab65d96f-3657-4ba8-9910-4595233cbd7f\") "
May 17 00:44:06.335731 kubelet[1897]: I0517 00:44:06.335326 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.335731 kubelet[1897]: I0517 00:44:06.335372 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cni-path" (OuterVolumeSpecName: "cni-path") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.335731 kubelet[1897]: I0517 00:44:06.335395 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.335731 kubelet[1897]: I0517 00:44:06.335417 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.339301 kubelet[1897]: I0517 00:44:06.339221 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.339828 kubelet[1897]: I0517 00:44:06.339777 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.340087 kubelet[1897]: I0517 00:44:06.340049 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.340245 kubelet[1897]: I0517 00:44:06.340093 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:44:06.341173 kubelet[1897]: I0517 00:44:06.341119 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.347234 systemd[1]: var-lib-kubelet-pods-ab65d96f\x2d3657\x2d4ba8\x2d9910\x2d4595233cbd7f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:44:06.354432 systemd[1]: var-lib-kubelet-pods-ab65d96f\x2d3657\x2d4ba8\x2d9910\x2d4595233cbd7f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 17 00:44:06.356290 kubelet[1897]: I0517 00:44:06.356232 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-hostproc" (OuterVolumeSpecName: "hostproc") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.356603 kubelet[1897]: I0517 00:44:06.356272 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab65d96f-3657-4ba8-9910-4595233cbd7f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:44:06.356866 kubelet[1897]: I0517 00:44:06.356821 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:44:06.357747 kubelet[1897]: I0517 00:44:06.357702 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:44:06.361553 kubelet[1897]: I0517 00:44:06.361313 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab65d96f-3657-4ba8-9910-4595233cbd7f-kube-api-access-n5tc8" (OuterVolumeSpecName: "kube-api-access-n5tc8") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "kube-api-access-n5tc8".
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:44:06.364454 kubelet[1897]: I0517 00:44:06.364201 1897 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab65d96f-3657-4ba8-9910-4595233cbd7f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ab65d96f-3657-4ba8-9910-4595233cbd7f" (UID: "ab65d96f-3657-4ba8-9910-4595233cbd7f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:44:06.435662 kubelet[1897]: I0517 00:44:06.435565 1897 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-host-proc-sys-net\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.435662 kubelet[1897]: I0517 00:44:06.435654 1897 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-lib-modules\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.435662 kubelet[1897]: I0517 00:44:06.435685 1897 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-xtables-lock\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436137 kubelet[1897]: I0517 00:44:06.435699 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-cgroup\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436137 kubelet[1897]: I0517 00:44:06.435714 1897 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n5tc8\" (UniqueName: \"kubernetes.io/projected/ab65d96f-3657-4ba8-9910-4595233cbd7f-kube-api-access-n5tc8\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436137 kubelet[1897]: I0517 00:44:06.435733 1897 
reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436137 kubelet[1897]: I0517 00:44:06.435749 1897 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab65d96f-3657-4ba8-9910-4595233cbd7f-hubble-tls\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436137 kubelet[1897]: I0517 00:44:06.435765 1897 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-hostproc\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436137 kubelet[1897]: I0517 00:44:06.435779 1897 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-etc-cni-netd\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436137 kubelet[1897]: I0517 00:44:06.435794 1897 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-bpf-maps\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436137 kubelet[1897]: I0517 00:44:06.435808 1897 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab65d96f-3657-4ba8-9910-4595233cbd7f-clustermesh-secrets\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436626 kubelet[1897]: I0517 00:44:06.435821 1897 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cni-path\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436626 kubelet[1897]: I0517 00:44:06.435838 1897 reconciler_common.go:299] 
"Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436626 kubelet[1897]: I0517 00:44:06.435859 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-run\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.436626 kubelet[1897]: I0517 00:44:06.435873 1897 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab65d96f-3657-4ba8-9910-4595233cbd7f-cilium-config-path\") on node \"ci-3510.3.7-n-b5ee3a085c\" DevicePath \"\"" May 17 00:44:06.665853 kubelet[1897]: E0517 00:44:06.665572 1897 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:44:06.809498 systemd[1]: var-lib-kubelet-pods-ab65d96f\x2d3657\x2d4ba8\x2d9910\x2d4595233cbd7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn5tc8.mount: Deactivated successfully. May 17 00:44:06.810117 systemd[1]: var-lib-kubelet-pods-ab65d96f\x2d3657\x2d4ba8\x2d9910\x2d4595233cbd7f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 17 00:44:07.113798 kubelet[1897]: I0517 00:44:07.113763 1897 scope.go:117] "RemoveContainer" containerID="208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba" May 17 00:44:07.117377 env[1190]: time="2025-05-17T00:44:07.117302318Z" level=info msg="RemoveContainer for \"208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba\"" May 17 00:44:07.123981 env[1190]: time="2025-05-17T00:44:07.120594443Z" level=info msg="RemoveContainer for \"208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba\" returns successfully" May 17 00:44:07.123183 systemd[1]: Removed slice kubepods-burstable-podab65d96f_3657_4ba8_9910_4595233cbd7f.slice. May 17 00:44:07.236405 systemd[1]: Created slice kubepods-burstable-pod17d11182_0500_4ead_8b8c_c684215c978f.slice. May 17 00:44:07.345082 kubelet[1897]: I0517 00:44:07.345005 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-xtables-lock\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.345966 kubelet[1897]: I0517 00:44:07.345911 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-hostproc\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.346296 kubelet[1897]: I0517 00:44:07.346246 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/17d11182-0500-4ead-8b8c-c684215c978f-cilium-ipsec-secrets\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.346498 kubelet[1897]: I0517 00:44:07.346468 1897 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-cilium-run\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.346697 kubelet[1897]: I0517 00:44:07.346673 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-cilium-cgroup\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.346812 kubelet[1897]: I0517 00:44:07.346782 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17d11182-0500-4ead-8b8c-c684215c978f-clustermesh-secrets\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.346898 kubelet[1897]: I0517 00:44:07.346884 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17d11182-0500-4ead-8b8c-c684215c978f-hubble-tls\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.346980 kubelet[1897]: I0517 00:44:07.346966 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-cni-path\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.347057 kubelet[1897]: I0517 00:44:07.347044 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-host-proc-sys-kernel\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.347146 kubelet[1897]: I0517 00:44:07.347130 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-bpf-maps\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.347234 kubelet[1897]: I0517 00:44:07.347220 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-lib-modules\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.347310 kubelet[1897]: I0517 00:44:07.347298 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17d11182-0500-4ead-8b8c-c684215c978f-cilium-config-path\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.347387 kubelet[1897]: I0517 00:44:07.347374 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-host-proc-sys-net\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.347473 kubelet[1897]: I0517 00:44:07.347457 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tp4q\" (UniqueName: \"kubernetes.io/projected/17d11182-0500-4ead-8b8c-c684215c978f-kube-api-access-5tp4q\") pod \"cilium-2nwq6\" (UID: 
\"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.347621 kubelet[1897]: I0517 00:44:07.347602 1897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17d11182-0500-4ead-8b8c-c684215c978f-etc-cni-netd\") pod \"cilium-2nwq6\" (UID: \"17d11182-0500-4ead-8b8c-c684215c978f\") " pod="kube-system/cilium-2nwq6" May 17 00:44:07.540816 kubelet[1897]: E0517 00:44:07.540725 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:07.543384 env[1190]: time="2025-05-17T00:44:07.542798753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nwq6,Uid:17d11182-0500-4ead-8b8c-c684215c978f,Namespace:kube-system,Attempt:0,}" May 17 00:44:07.562000 env[1190]: time="2025-05-17T00:44:07.561843182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:44:07.562000 env[1190]: time="2025-05-17T00:44:07.561914965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:44:07.562000 env[1190]: time="2025-05-17T00:44:07.561960582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:44:07.562950 env[1190]: time="2025-05-17T00:44:07.562852675Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf pid=3838 runtime=io.containerd.runc.v2 May 17 00:44:07.581998 kubelet[1897]: I0517 00:44:07.581576 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab65d96f-3657-4ba8-9910-4595233cbd7f" path="/var/lib/kubelet/pods/ab65d96f-3657-4ba8-9910-4595233cbd7f/volumes" May 17 00:44:07.587593 systemd[1]: Started cri-containerd-d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf.scope. May 17 00:44:07.631731 env[1190]: time="2025-05-17T00:44:07.631676826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nwq6,Uid:17d11182-0500-4ead-8b8c-c684215c978f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\"" May 17 00:44:07.633564 kubelet[1897]: E0517 00:44:07.633177 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:07.644436 env[1190]: time="2025-05-17T00:44:07.644266190Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:44:07.665129 env[1190]: time="2025-05-17T00:44:07.665040004Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531\"" May 17 00:44:07.669090 env[1190]: time="2025-05-17T00:44:07.669027054Z" level=info msg="StartContainer for 
\"66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531\"" May 17 00:44:07.707278 systemd[1]: Started cri-containerd-66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531.scope. May 17 00:44:07.759812 env[1190]: time="2025-05-17T00:44:07.759692645Z" level=info msg="StartContainer for \"66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531\" returns successfully" May 17 00:44:07.778140 systemd[1]: cri-containerd-66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531.scope: Deactivated successfully. May 17 00:44:07.830596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531-rootfs.mount: Deactivated successfully. May 17 00:44:07.843879 env[1190]: time="2025-05-17T00:44:07.843799721Z" level=info msg="shim disconnected" id=66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531 May 17 00:44:07.843879 env[1190]: time="2025-05-17T00:44:07.843877746Z" level=warning msg="cleaning up after shim disconnected" id=66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531 namespace=k8s.io May 17 00:44:07.843879 env[1190]: time="2025-05-17T00:44:07.843891195Z" level=info msg="cleaning up dead shim" May 17 00:44:07.861742 env[1190]: time="2025-05-17T00:44:07.861654896Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3922 runtime=io.containerd.runc.v2\n" May 17 00:44:08.121358 kubelet[1897]: E0517 00:44:08.121213 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:08.138902 env[1190]: time="2025-05-17T00:44:08.138809409Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 
00:44:08.162321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224821723.mount: Deactivated successfully. May 17 00:44:08.166749 env[1190]: time="2025-05-17T00:44:08.166652375Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f\"" May 17 00:44:08.168837 env[1190]: time="2025-05-17T00:44:08.168785266Z" level=info msg="StartContainer for \"e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f\"" May 17 00:44:08.238352 systemd[1]: Started cri-containerd-e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f.scope. May 17 00:44:08.285981 env[1190]: time="2025-05-17T00:44:08.285902720Z" level=info msg="StartContainer for \"e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f\" returns successfully" May 17 00:44:08.296766 systemd[1]: cri-containerd-e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f.scope: Deactivated successfully. 
May 17 00:44:08.301457 kubelet[1897]: W0517 00:44:08.300369 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab65d96f_3657_4ba8_9910_4595233cbd7f.slice/cri-containerd-208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba.scope WatchSource:0}: container "208333900297f8b937acba915f98973e9606f4a2500e185ed95679badff3afba" in namespace "k8s.io": not found May 17 00:44:08.339039 env[1190]: time="2025-05-17T00:44:08.338959197Z" level=info msg="shim disconnected" id=e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f May 17 00:44:08.339039 env[1190]: time="2025-05-17T00:44:08.339031124Z" level=warning msg="cleaning up after shim disconnected" id=e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f namespace=k8s.io May 17 00:44:08.339039 env[1190]: time="2025-05-17T00:44:08.339049613Z" level=info msg="cleaning up dead shim" May 17 00:44:08.354125 env[1190]: time="2025-05-17T00:44:08.354018298Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3984 runtime=io.containerd.runc.v2\n" May 17 00:44:08.808759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f-rootfs.mount: Deactivated successfully. 
May 17 00:44:09.125045 kubelet[1897]: E0517 00:44:09.124891 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:09.134501 env[1190]: time="2025-05-17T00:44:09.134412537Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:44:09.159084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752756157.mount: Deactivated successfully. May 17 00:44:09.176638 env[1190]: time="2025-05-17T00:44:09.176571145Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e\"" May 17 00:44:09.178721 env[1190]: time="2025-05-17T00:44:09.178582724Z" level=info msg="StartContainer for \"11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e\"" May 17 00:44:09.220816 systemd[1]: Started cri-containerd-11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e.scope. May 17 00:44:09.281929 env[1190]: time="2025-05-17T00:44:09.281843742Z" level=info msg="StartContainer for \"11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e\" returns successfully" May 17 00:44:09.291200 systemd[1]: cri-containerd-11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e.scope: Deactivated successfully. 
May 17 00:44:09.335105 env[1190]: time="2025-05-17T00:44:09.334990417Z" level=info msg="shim disconnected" id=11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e May 17 00:44:09.335665 env[1190]: time="2025-05-17T00:44:09.335616830Z" level=warning msg="cleaning up after shim disconnected" id=11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e namespace=k8s.io May 17 00:44:09.335909 env[1190]: time="2025-05-17T00:44:09.335876893Z" level=info msg="cleaning up dead shim" May 17 00:44:09.352216 env[1190]: time="2025-05-17T00:44:09.352146142Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4043 runtime=io.containerd.runc.v2\n" May 17 00:44:09.808725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e-rootfs.mount: Deactivated successfully. May 17 00:44:10.132943 kubelet[1897]: E0517 00:44:10.131619 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:10.145309 env[1190]: time="2025-05-17T00:44:10.145204813Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:44:10.164109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895831261.mount: Deactivated successfully. 
May 17 00:44:10.182895 env[1190]: time="2025-05-17T00:44:10.182798328Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372\"" May 17 00:44:10.184830 env[1190]: time="2025-05-17T00:44:10.184768526Z" level=info msg="StartContainer for \"554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372\"" May 17 00:44:10.212113 systemd[1]: Started cri-containerd-554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372.scope. May 17 00:44:10.259416 env[1190]: time="2025-05-17T00:44:10.259249326Z" level=info msg="StartContainer for \"554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372\" returns successfully" May 17 00:44:10.261917 systemd[1]: cri-containerd-554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372.scope: Deactivated successfully. May 17 00:44:10.297248 env[1190]: time="2025-05-17T00:44:10.297169662Z" level=info msg="shim disconnected" id=554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372 May 17 00:44:10.297811 env[1190]: time="2025-05-17T00:44:10.297761446Z" level=warning msg="cleaning up after shim disconnected" id=554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372 namespace=k8s.io May 17 00:44:10.297969 env[1190]: time="2025-05-17T00:44:10.297946570Z" level=info msg="cleaning up dead shim" May 17 00:44:10.312890 env[1190]: time="2025-05-17T00:44:10.312815762Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:44:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4102 runtime=io.containerd.runc.v2\n" May 17 00:44:10.809415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372-rootfs.mount: Deactivated successfully. 
May 17 00:44:11.137645 kubelet[1897]: E0517 00:44:11.137446 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 17 00:44:11.144081 env[1190]: time="2025-05-17T00:44:11.144017921Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:44:11.173006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217714895.mount: Deactivated successfully. May 17 00:44:11.188952 env[1190]: time="2025-05-17T00:44:11.188833951Z" level=info msg="CreateContainer within sandbox \"d53af34352f5fc9a114d2ebdcef279ef224e67837fb8dedac96a22eb7765eaaf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74b7483f0f1f02590532052e4de463c731db23c8addbeaf8fe756511e719eeb8\"" May 17 00:44:11.190206 env[1190]: time="2025-05-17T00:44:11.190163191Z" level=info msg="StartContainer for \"74b7483f0f1f02590532052e4de463c731db23c8addbeaf8fe756511e719eeb8\"" May 17 00:44:11.241439 systemd[1]: Started cri-containerd-74b7483f0f1f02590532052e4de463c731db23c8addbeaf8fe756511e719eeb8.scope. 
May 17 00:44:11.282445 env[1190]: time="2025-05-17T00:44:11.282381900Z" level=info msg="StartContainer for \"74b7483f0f1f02590532052e4de463c731db23c8addbeaf8fe756511e719eeb8\" returns successfully"
May 17 00:44:11.416589 kubelet[1897]: W0517 00:44:11.416281 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17d11182_0500_4ead_8b8c_c684215c978f.slice/cri-containerd-66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531.scope WatchSource:0}: task 66e6d651c7bb1465ba74cbbef29f9eaf32cb6c9fd9258fce4d9d79dafadd3531 not found
May 17 00:44:11.462616 env[1190]: time="2025-05-17T00:44:11.462525921Z" level=info msg="StopPodSandbox for \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\""
May 17 00:44:11.462888 env[1190]: time="2025-05-17T00:44:11.462673426Z" level=info msg="TearDown network for sandbox \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\" successfully"
May 17 00:44:11.462888 env[1190]: time="2025-05-17T00:44:11.462720695Z" level=info msg="StopPodSandbox for \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\" returns successfully"
May 17 00:44:11.464610 env[1190]: time="2025-05-17T00:44:11.464549571Z" level=info msg="RemovePodSandbox for \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\""
May 17 00:44:11.464790 env[1190]: time="2025-05-17T00:44:11.464616138Z" level=info msg="Forcibly stopping sandbox \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\""
May 17 00:44:11.464790 env[1190]: time="2025-05-17T00:44:11.464745128Z" level=info msg="TearDown network for sandbox \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\" successfully"
May 17 00:44:11.470020 env[1190]: time="2025-05-17T00:44:11.469953991Z" level=info msg="RemovePodSandbox \"ec5d134b71a952d33c940f82e2ff79635fe19002d631a22aeddaed98fa41f140\" returns successfully"
May 17 00:44:11.472414 env[1190]: time="2025-05-17T00:44:11.472259013Z" level=info msg="StopPodSandbox for \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\""
May 17 00:44:11.472695 env[1190]: time="2025-05-17T00:44:11.472573680Z" level=info msg="TearDown network for sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" successfully"
May 17 00:44:11.472771 env[1190]: time="2025-05-17T00:44:11.472687904Z" level=info msg="StopPodSandbox for \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" returns successfully"
May 17 00:44:11.473425 env[1190]: time="2025-05-17T00:44:11.473368789Z" level=info msg="RemovePodSandbox for \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\""
May 17 00:44:11.473604 env[1190]: time="2025-05-17T00:44:11.473418487Z" level=info msg="Forcibly stopping sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\""
May 17 00:44:11.473754 env[1190]: time="2025-05-17T00:44:11.473717177Z" level=info msg="TearDown network for sandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" successfully"
May 17 00:44:11.478484 env[1190]: time="2025-05-17T00:44:11.478305331Z" level=info msg="RemovePodSandbox \"1500c07b6b5c1c18b8aeedef83f38de6b0aba82a72221884fea8a0ed7d50ce79\" returns successfully"
May 17 00:44:11.479279 env[1190]: time="2025-05-17T00:44:11.479192909Z" level=info msg="StopPodSandbox for \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\""
May 17 00:44:11.479492 env[1190]: time="2025-05-17T00:44:11.479381523Z" level=info msg="TearDown network for sandbox \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\" successfully"
May 17 00:44:11.479492 env[1190]: time="2025-05-17T00:44:11.479441403Z" level=info msg="StopPodSandbox for \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\" returns successfully"
May 17 00:44:11.479994 env[1190]: time="2025-05-17T00:44:11.479949290Z" level=info msg="RemovePodSandbox for \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\""
May 17 00:44:11.480136 env[1190]: time="2025-05-17T00:44:11.480011038Z" level=info msg="Forcibly stopping sandbox \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\""
May 17 00:44:11.480136 env[1190]: time="2025-05-17T00:44:11.480116065Z" level=info msg="TearDown network for sandbox \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\" successfully"
May 17 00:44:11.483875 env[1190]: time="2025-05-17T00:44:11.483772344Z" level=info msg="RemovePodSandbox \"e23436e85c6179fa99f5dadca43586b72aa52c756125aaef1ec0b50a02c249a0\" returns successfully"
May 17 00:44:11.957580 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:44:12.146647 kubelet[1897]: E0517 00:44:12.146547 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:12.181370 kubelet[1897]: I0517 00:44:12.181193 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2nwq6" podStartSLOduration=5.18117254 podStartE2EDuration="5.18117254s" podCreationTimestamp="2025-05-17 00:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:44:12.176842924 +0000 UTC m=+121.014262992" watchObservedRunningTime="2025-05-17 00:44:12.18117254 +0000 UTC m=+121.018592820"
May 17 00:44:12.577926 kubelet[1897]: E0517 00:44:12.577865 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:13.541937 kubelet[1897]: E0517 00:44:13.541891 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:14.527544 kubelet[1897]: W0517 00:44:14.525852 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17d11182_0500_4ead_8b8c_c684215c978f.slice/cri-containerd-e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f.scope WatchSource:0}: task e7dc0b3703fed02e432bc5cda05aec74b1cea2dc019332a2d77f908ee5eab51f not found
May 17 00:44:15.458782 systemd-networkd[1010]: lxc_health: Link UP
May 17 00:44:15.470060 systemd-networkd[1010]: lxc_health: Gained carrier
May 17 00:44:15.470540 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:44:15.545135 kubelet[1897]: E0517 00:44:15.544537 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:16.154587 kubelet[1897]: E0517 00:44:16.154502 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:17.156751 kubelet[1897]: E0517 00:44:17.156654 1897 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 17 00:44:17.269917 systemd-networkd[1010]: lxc_health: Gained IPv6LL
May 17 00:44:17.639969 kubelet[1897]: W0517 00:44:17.639913 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17d11182_0500_4ead_8b8c_c684215c978f.slice/cri-containerd-11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e.scope WatchSource:0}: task 11e26e11ae55693f2a6c593b0453798b28800a57e53f7eb057d41c2cdb153c3e not found
May 17 00:44:20.225270 systemd[1]: run-containerd-runc-k8s.io-74b7483f0f1f02590532052e4de463c731db23c8addbeaf8fe756511e719eeb8-runc.GoUMVd.mount: Deactivated successfully.
May 17 00:44:20.750541 kubelet[1897]: W0517 00:44:20.750463 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17d11182_0500_4ead_8b8c_c684215c978f.slice/cri-containerd-554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372.scope WatchSource:0}: task 554405ddac9893207abd586bd974e6369b021b56e9ea037a70578a706d25a372 not found
May 17 00:44:22.502398 sshd[3699]: pam_unix(sshd:session): session closed for user core
May 17 00:44:22.508184 systemd[1]: sshd@28-64.23.137.34:22-147.75.109.163:37528.service: Deactivated successfully.
May 17 00:44:22.509729 systemd[1]: session-28.scope: Deactivated successfully.
May 17 00:44:22.511791 systemd-logind[1179]: Session 28 logged out. Waiting for processes to exit.
May 17 00:44:22.514545 systemd-logind[1179]: Removed session 28.