Mar 17 18:46:02.567601 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:46:02.567655 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:46:02.567674 kernel: BIOS-provided physical RAM map:
Mar 17 18:46:02.567683 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 18:46:02.567692 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 18:46:02.567700 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 18:46:02.567777 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Mar 17 18:46:02.567787 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Mar 17 18:46:02.567800 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 18:46:02.567810 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 18:46:02.567820 kernel: NX (Execute Disable) protection: active
Mar 17 18:46:02.567829 kernel: SMBIOS 2.8 present.
Mar 17 18:46:02.567839 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Mar 17 18:46:02.567848 kernel: Hypervisor detected: KVM
Mar 17 18:46:02.567859 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:46:02.567873 kernel: kvm-clock: cpu 0, msr 1819a001, primary cpu clock
Mar 17 18:46:02.567883 kernel: kvm-clock: using sched offset of 4768649480 cycles
Mar 17 18:46:02.567894 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:46:02.567905 kernel: tsc: Detected 2494.170 MHz processor
Mar 17 18:46:02.567915 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:46:02.567926 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:46:02.567935 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Mar 17 18:46:02.567957 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:46:02.567971 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:46:02.567981 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Mar 17 18:46:02.567992 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:46:02.568003 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:46:02.568013 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:46:02.568023 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 17 18:46:02.568034 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:46:02.568050 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:46:02.568061 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:46:02.568076 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:46:02.568087 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Mar 17 18:46:02.568098 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Mar 17 18:46:02.568109 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 17 18:46:02.568120 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Mar 17 18:46:02.568148 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Mar 17 18:46:02.568158 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Mar 17 18:46:02.568169 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Mar 17 18:46:02.568191 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 18:46:02.568203 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 18:46:02.568214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 18:46:02.568226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 18:46:02.568258 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Mar 17 18:46:02.568269 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Mar 17 18:46:02.568286 kernel: Zone ranges:
Mar 17 18:46:02.568297 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:46:02.568309 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Mar 17 18:46:02.568321 kernel: Normal empty
Mar 17 18:46:02.568332 kernel: Movable zone start for each node
Mar 17 18:46:02.568343 kernel: Early memory node ranges
Mar 17 18:46:02.568355 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 18:46:02.568366 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Mar 17 18:46:02.568377 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Mar 17 18:46:02.568395 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:46:02.568410 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 18:46:02.568421 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Mar 17 18:46:02.568433 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:46:02.568444 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:46:02.568455 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:46:02.568467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:46:02.568479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:46:02.568491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:46:02.568507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:46:02.568519 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:46:02.568530 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:46:02.568541 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 18:46:02.568553 kernel: TSC deadline timer available
Mar 17 18:46:02.568564 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 18:46:02.568575 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Mar 17 18:46:02.568587 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:46:02.568598 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:46:02.568615 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Mar 17 18:46:02.568626 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Mar 17 18:46:02.568638 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Mar 17 18:46:02.568652 kernel: pcpu-alloc: [0] 0 1
Mar 17 18:46:02.568664 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Mar 17 18:46:02.568675 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 18:46:02.568686 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Mar 17 18:46:02.568697 kernel: Policy zone: DMA32
Mar 17 18:46:02.568711 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:46:02.568729 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:46:02.568757 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:46:02.568769 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 18:46:02.568780 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:46:02.568792 kernel: Memory: 1973276K/2096612K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 123076K reserved, 0K cma-reserved)
Mar 17 18:46:02.568804 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:46:02.568816 kernel: Kernel/User page tables isolation: enabled
Mar 17 18:46:02.568828 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:46:02.568845 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:46:02.568856 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:46:02.568869 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:46:02.568881 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:46:02.568892 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:46:02.568904 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:46:02.568915 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:46:02.568927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:46:02.568940 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 18:46:02.568956 kernel: random: crng init done
Mar 17 18:46:02.568968 kernel: Console: colour VGA+ 80x25
Mar 17 18:46:02.568990 kernel: printk: console [tty0] enabled
Mar 17 18:46:02.569001 kernel: printk: console [ttyS0] enabled
Mar 17 18:46:02.569011 kernel: ACPI: Core revision 20210730
Mar 17 18:46:02.569023 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 18:46:02.569034 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:46:02.569045 kernel: x2apic enabled
Mar 17 18:46:02.569056 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:46:02.569068 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:46:02.569097 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3b633397, max_idle_ns: 440795206106 ns
Mar 17 18:46:02.569108 kernel: Calibrating delay loop (skipped) preset value.. 4988.34 BogoMIPS (lpj=2494170)
Mar 17 18:46:02.569122 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 18:46:02.569133 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 18:46:02.569145 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:46:02.569156 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:46:02.569167 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:46:02.569179 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:46:02.569194 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 17 18:46:02.569233 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:46:02.569245 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:46:02.569284 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 18:46:02.569296 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 18:46:02.569308 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:46:02.569322 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:46:02.569334 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:46:02.569346 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:46:02.569358 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:46:02.569375 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:46:02.569403 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:46:02.569414 kernel: LSM: Security Framework initializing
Mar 17 18:46:02.569427 kernel: SELinux: Initializing.
Mar 17 18:46:02.569438 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:46:02.569451 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:46:02.569462 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Mar 17 18:46:02.569503 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Mar 17 18:46:02.569515 kernel: signal: max sigframe size: 1776
Mar 17 18:46:02.569526 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:46:02.569538 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 18:46:02.569550 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:46:02.569562 kernel: x86: Booting SMP configuration:
Mar 17 18:46:02.569574 kernel: .... node #0, CPUs: #1
Mar 17 18:46:02.569585 kernel: kvm-clock: cpu 1, msr 1819a041, secondary cpu clock
Mar 17 18:46:02.569597 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Mar 17 18:46:02.569621 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:46:02.569633 kernel: smpboot: Max logical packages: 1
Mar 17 18:46:02.569645 kernel: smpboot: Total of 2 processors activated (9976.68 BogoMIPS)
Mar 17 18:46:02.569657 kernel: devtmpfs: initialized
Mar 17 18:46:02.569668 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:46:02.569681 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:46:02.569693 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:46:02.569706 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:46:02.569717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:46:02.569735 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:46:02.569747 kernel: audit: type=2000 audit(1742237160.300:1): state=initialized audit_enabled=0 res=1
Mar 17 18:46:02.569759 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:46:02.569771 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:46:02.569783 kernel: cpuidle: using governor menu
Mar 17 18:46:02.569795 kernel: ACPI: bus type PCI registered
Mar 17 18:46:02.569808 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:46:02.569820 kernel: dca service started, version 1.12.1
Mar 17 18:46:02.569831 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:46:02.569848 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:46:02.569860 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:46:02.569872 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:46:02.569884 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:46:02.569896 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:46:02.569908 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:46:02.569919 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:46:02.569931 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:46:02.569948 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:46:02.569965 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:46:02.569977 kernel: ACPI: Interpreter enabled
Mar 17 18:46:02.569989 kernel: ACPI: PM: (supports S0 S5)
Mar 17 18:46:02.570015 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:46:02.570028 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:46:02.570040 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 18:46:02.570052 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:46:02.573629 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:46:02.573888 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Mar 17 18:46:02.573916 kernel: acpiphp: Slot [3] registered
Mar 17 18:46:02.573931 kernel: acpiphp: Slot [4] registered
Mar 17 18:46:02.573948 kernel: acpiphp: Slot [5] registered
Mar 17 18:46:02.573964 kernel: acpiphp: Slot [6] registered
Mar 17 18:46:02.573981 kernel: acpiphp: Slot [7] registered
Mar 17 18:46:02.573998 kernel: acpiphp: Slot [8] registered
Mar 17 18:46:02.574012 kernel: acpiphp: Slot [9] registered
Mar 17 18:46:02.574027 kernel: acpiphp: Slot [10] registered
Mar 17 18:46:02.574054 kernel: acpiphp: Slot [11] registered
Mar 17 18:46:02.574072 kernel: acpiphp: Slot [12] registered
Mar 17 18:46:02.574090 kernel: acpiphp: Slot [13] registered
Mar 17 18:46:02.574107 kernel: acpiphp: Slot [14] registered
Mar 17 18:46:02.574123 kernel: acpiphp: Slot [15] registered
Mar 17 18:46:02.574139 kernel: acpiphp: Slot [16] registered
Mar 17 18:46:02.574155 kernel: acpiphp: Slot [17] registered
Mar 17 18:46:02.574170 kernel: acpiphp: Slot [18] registered
Mar 17 18:46:02.574186 kernel: acpiphp: Slot [19] registered
Mar 17 18:46:02.576415 kernel: acpiphp: Slot [20] registered
Mar 17 18:46:02.576443 kernel: acpiphp: Slot [21] registered
Mar 17 18:46:02.576457 kernel: acpiphp: Slot [22] registered
Mar 17 18:46:02.576468 kernel: acpiphp: Slot [23] registered
Mar 17 18:46:02.576481 kernel: acpiphp: Slot [24] registered
Mar 17 18:46:02.576494 kernel: acpiphp: Slot [25] registered
Mar 17 18:46:02.576599 kernel: acpiphp: Slot [26] registered
Mar 17 18:46:02.576613 kernel: acpiphp: Slot [27] registered
Mar 17 18:46:02.576627 kernel: acpiphp: Slot [28] registered
Mar 17 18:46:02.576640 kernel: acpiphp: Slot [29] registered
Mar 17 18:46:02.576664 kernel: acpiphp: Slot [30] registered
Mar 17 18:46:02.576678 kernel: acpiphp: Slot [31] registered
Mar 17 18:46:02.576692 kernel: PCI host bridge to bus 0000:00
Mar 17 18:46:02.577845 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:46:02.578044 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:46:02.578176 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:46:02.578363 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 17 18:46:02.578549 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Mar 17 18:46:02.578688 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:46:02.578888 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 18:46:02.579071 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 18:46:02.579273 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 18:46:02.579438 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Mar 17 18:46:02.579594 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 18:46:02.579739 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 18:46:02.579882 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 18:46:02.580035 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 18:46:02.583542 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Mar 17 18:46:02.584287 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Mar 17 18:46:02.584683 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 18:46:02.584897 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 18:46:02.585482 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 18:46:02.585759 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 18:46:02.585994 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 18:46:02.586247 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Mar 17 18:46:02.586434 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Mar 17 18:46:02.586641 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Mar 17 18:46:02.586941 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:46:02.587187 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:46:02.591655 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Mar 17 18:46:02.591868 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Mar 17 18:46:02.592019 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Mar 17 18:46:02.594395 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:46:02.594930 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Mar 17 18:46:02.595152 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Mar 17 18:46:02.595509 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Mar 17 18:46:02.595690 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Mar 17 18:46:02.595838 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Mar 17 18:46:02.595977 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Mar 17 18:46:02.596147 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Mar 17 18:46:02.596360 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:46:02.596509 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 18:46:02.596664 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Mar 17 18:46:02.596835 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Mar 17 18:46:02.597016 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:46:02.597174 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Mar 17 18:46:02.599578 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Mar 17 18:46:02.599836 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Mar 17 18:46:02.600060 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 18:46:02.602356 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Mar 17 18:46:02.602711 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Mar 17 18:46:02.602748 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:46:02.602762 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:46:02.602775 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:46:02.602804 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:46:02.602817 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 18:46:02.602830 kernel: iommu: Default domain type: Translated
Mar 17 18:46:02.602843 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:46:02.603028 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 18:46:02.603202 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:46:02.603425 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 18:46:02.603451 kernel: vgaarb: loaded
Mar 17 18:46:02.603465 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:46:02.603491 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:46:02.603504 kernel: PTP clock support registered
Mar 17 18:46:02.603517 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:46:02.603530 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:46:02.603543 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 18:46:02.603558 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Mar 17 18:46:02.603570 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 18:46:02.603583 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 18:46:02.603602 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:46:02.603615 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:46:02.603629 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:46:02.603643 kernel: pnp: PnP ACPI init
Mar 17 18:46:02.603655 kernel: pnp: PnP ACPI: found 4 devices
Mar 17 18:46:02.603668 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:46:02.603681 kernel: NET: Registered PF_INET protocol family
Mar 17 18:46:02.603695 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:46:02.603709 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 18:46:02.603728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:46:02.603741 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:46:02.603755 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Mar 17 18:46:02.603768 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 18:46:02.603781 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:46:02.603795 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:46:02.603809 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:46:02.603825 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:46:02.604015 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:46:02.604159 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:46:02.606562 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:46:02.606767 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 17 18:46:02.606913 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Mar 17 18:46:02.607129 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 18:46:02.607375 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 18:46:02.607561 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Mar 17 18:46:02.607583 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 18:46:02.607782 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 53312 usecs
Mar 17 18:46:02.607806 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:46:02.607821 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 18:46:02.607836 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3b633397, max_idle_ns: 440795206106 ns
Mar 17 18:46:02.607849 kernel: Initialise system trusted keyrings
Mar 17 18:46:02.607864 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 18:46:02.607879 kernel: Key type asymmetric registered
Mar 17 18:46:02.607894 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:46:02.607907 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:46:02.607931 kernel: io scheduler mq-deadline registered
Mar 17 18:46:02.607944 kernel: io scheduler kyber registered
Mar 17 18:46:02.607957 kernel: io scheduler bfq registered
Mar 17 18:46:02.607971 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:46:02.607984 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 18:46:02.607998 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 18:46:02.608011 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 18:46:02.608027 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:46:02.608041 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:46:02.608063 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:46:02.608077 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:46:02.608091 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:46:02.608106 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:46:02.611637 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 17 18:46:02.611873 kernel: rtc_cmos 00:03: registered as rtc0
Mar 17 18:46:02.612052 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T18:46:01 UTC (1742237161)
Mar 17 18:46:02.612207 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Mar 17 18:46:02.612325 kernel: intel_pstate: CPU model not supported
Mar 17 18:46:02.612339 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:46:02.612353 kernel: Segment Routing with IPv6
Mar 17 18:46:02.612366 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:46:02.612380 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:46:02.612393 kernel: Key type dns_resolver registered
Mar 17 18:46:02.612409 kernel: IPI shorthand broadcast: enabled
Mar 17 18:46:02.612422 kernel: sched_clock: Marking stable (948315603, 124238328)->(1365634741, -293080810)
Mar 17 18:46:02.612438 kernel: registered taskstats version 1
Mar 17 18:46:02.612462 kernel: Loading compiled-in X.509 certificates
Mar 17 18:46:02.612474 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:46:02.612487 kernel: Key type .fscrypt registered
Mar 17 18:46:02.612499 kernel: Key type fscrypt-provisioning registered
Mar 17 18:46:02.612513 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:46:02.612527 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:46:02.612541 kernel: ima: No architecture policies found
Mar 17 18:46:02.612554 kernel: clk: Disabling unused clocks
Mar 17 18:46:02.612572 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:46:02.612584 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:46:02.612597 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:46:02.612613 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:46:02.612626 kernel: Run /init as init process
Mar 17 18:46:02.612638 kernel: with arguments:
Mar 17 18:46:02.612685 kernel: /init
Mar 17 18:46:02.612703 kernel: with environment:
Mar 17 18:46:02.612717 kernel: HOME=/
Mar 17 18:46:02.612738 kernel: TERM=linux
Mar 17 18:46:02.612751 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:46:02.612776 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:46:02.612796 systemd[1]: Detected virtualization kvm.
Mar 17 18:46:02.612812 systemd[1]: Detected architecture x86-64.
Mar 17 18:46:02.612826 systemd[1]: Running in initrd.
Mar 17 18:46:02.612840 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:46:02.612856 systemd[1]: Hostname set to .
Mar 17 18:46:02.612879 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:46:02.612894 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:46:02.612935 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:46:02.612949 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:46:02.612964 systemd[1]: Reached target paths.target.
Mar 17 18:46:02.612978 systemd[1]: Reached target slices.target. Mar 17 18:46:02.612993 systemd[1]: Reached target swap.target. Mar 17 18:46:02.613007 systemd[1]: Reached target timers.target. Mar 17 18:46:02.613032 systemd[1]: Listening on iscsid.socket. Mar 17 18:46:02.613048 systemd[1]: Listening on iscsiuio.socket. Mar 17 18:46:02.613062 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:46:02.628393 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:46:02.628443 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:46:02.628460 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:46:02.628477 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:46:02.628495 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:46:02.628528 systemd[1]: Reached target sockets.target. Mar 17 18:46:02.628547 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:46:02.628568 systemd[1]: Finished network-cleanup.service. Mar 17 18:46:02.628585 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 18:46:02.628602 systemd[1]: Starting systemd-journald.service... Mar 17 18:46:02.628625 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:46:02.628641 systemd[1]: Starting systemd-resolved.service... Mar 17 18:46:02.628658 systemd[1]: Starting systemd-vconsole-setup.service... Mar 17 18:46:02.628674 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:46:02.628688 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 18:46:02.628701 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:46:02.628729 systemd-journald[184]: Journal started Mar 17 18:46:02.629033 systemd-journald[184]: Runtime Journal (/run/log/journal/80521a9e2638412bb81f5b33b28bc633) is 4.9M, max 39.5M, 34.5M free. Mar 17 18:46:02.557759 systemd-modules-load[185]: Inserted module 'overlay' Mar 17 18:46:02.654140 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Mar 17 18:46:02.616996 systemd-resolved[186]: Positive Trust Anchors: Mar 17 18:46:02.656335 systemd[1]: Started systemd-journald.service. Mar 17 18:46:02.617011 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:46:02.667916 kernel: Bridge firewalling registered Mar 17 18:46:02.667957 kernel: audit: type=1130 audit(1742237162.658:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.667973 kernel: audit: type=1130 audit(1742237162.658:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
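The kernel's bridge message above is actionable: since this kernel, bridged traffic only reaches arp/ip/ip6tables once br_netfilter is loaded. A minimal sketch of the persistent fix, writing the modules-load.d drop-in into a demo prefix rather than the real /etc:

```shell
# Sketch: persist br_netfilter loading via a modules-load.d style drop-in.
# $demo stands in for /etc/modules-load.d on a real host.
demo=/tmp/demo-modules-load.d
mkdir -p "$demo"
echo br_netfilter > "$demo/br_netfilter.conf"
# On a live system you would also load it immediately:
#   modprobe br_netfilter
cat "$demo/br_netfilter.conf"
```

systemd-modules-load reads such drop-ins at boot; in this log the same service is what inserts 'overlay' and, a moment later, 'br_netfilter' ("Bridge firewalling registered").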
res=success' Mar 17 18:46:02.617061 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:46:02.682538 kernel: audit: type=1130 audit(1742237162.659:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.682650 kernel: audit: type=1130 audit(1742237162.673:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.630420 systemd-resolved[186]: Defaulting to hostname 'linux'. Mar 17 18:46:02.656741 systemd-modules-load[185]: Inserted module 'br_netfilter' Mar 17 18:46:02.686151 kernel: SCSI subsystem initialized Mar 17 18:46:02.658577 systemd[1]: Started systemd-resolved.service. Mar 17 18:46:02.659509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:46:02.660111 systemd[1]: Reached target nss-lookup.target. Mar 17 18:46:02.672977 systemd[1]: Finished systemd-vconsole-setup.service. 
Mar 17 18:46:02.675850 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 18:46:02.723053 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 18:46:02.723173 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:46:02.725264 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:46:02.726533 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 18:46:02.741340 kernel: audit: type=1130 audit(1742237162.726:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.741401 kernel: audit: type=1130 audit(1742237162.736:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.730011 systemd[1]: Starting dracut-cmdline.service... Mar 17 18:46:02.733892 systemd-modules-load[185]: Inserted module 'dm_multipath' Mar 17 18:46:02.736295 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:46:02.744898 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:46:02.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:02.780374 dracut-cmdline[202]: dracut-dracut-053 Mar 17 18:46:02.787842 kernel: audit: type=1130 audit(1742237162.778:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:02.775332 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:46:02.790048 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 18:46:02.985193 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:46:03.024559 kernel: iscsi: registered transport (tcp) Mar 17 18:46:03.059818 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:46:03.059949 kernel: QLogic iSCSI HBA Driver Mar 17 18:46:03.202040 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:46:03.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:03.204959 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:46:03.209653 kernel: audit: type=1130 audit(1742237163.202:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
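dracut's "Using kernel command line parameters" line above mirrors the string the bootloader passed; after boot the same parameters remain readable from procfs, which is a quick way to double-check flags like verity.usrhash:

```shell
# The full command line the kernel was booted with (BOOT_IMAGE, root=, ...):
cat /proc/cmdline
# Pick out one family of flags; prints nothing on hosts without dm-verity:
tr ' ' '\n' < /proc/cmdline | grep '^verity\.' || true
```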
res=success' Mar 17 18:46:03.311418 kernel: raid6: avx2x4 gen() 14299 MB/s Mar 17 18:46:03.331737 kernel: raid6: avx2x4 xor() 5321 MB/s Mar 17 18:46:03.348325 kernel: raid6: avx2x2 gen() 13800 MB/s Mar 17 18:46:03.393371 kernel: raid6: avx2x2 xor() 4952 MB/s Mar 17 18:46:03.397421 kernel: raid6: avx2x1 gen() 10078 MB/s Mar 17 18:46:03.414307 kernel: raid6: avx2x1 xor() 10454 MB/s Mar 17 18:46:03.431322 kernel: raid6: sse2x4 gen() 7075 MB/s Mar 17 18:46:03.460346 kernel: raid6: sse2x4 xor() 4399 MB/s Mar 17 18:46:03.505281 kernel: raid6: sse2x2 gen() 7047 MB/s Mar 17 18:46:03.505418 kernel: raid6: sse2x2 xor() 5141 MB/s Mar 17 18:46:03.541174 kernel: raid6: sse2x1 gen() 5261 MB/s Mar 17 18:46:03.559111 kernel: raid6: sse2x1 xor() 4604 MB/s Mar 17 18:46:03.559280 kernel: raid6: using algorithm avx2x4 gen() 14299 MB/s Mar 17 18:46:03.559344 kernel: raid6: .... xor() 5321 MB/s, rmw enabled Mar 17 18:46:03.563438 kernel: raid6: using avx2x2 recovery algorithm Mar 17 18:46:03.584700 kernel: xor: automatically using best checksumming function avx Mar 17 18:46:03.746357 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 18:46:03.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:03.772246 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:46:03.777000 audit: BPF prog-id=7 op=LOAD Mar 17 18:46:03.778322 kernel: audit: type=1130 audit(1742237163.772:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:03.778000 audit: BPF prog-id=8 op=LOAD Mar 17 18:46:03.779898 systemd[1]: Starting systemd-udevd.service... Mar 17 18:46:03.822256 systemd-udevd[385]: Using default interface naming scheme 'v252'. Mar 17 18:46:03.832462 systemd[1]: Started systemd-udevd.service. 
Mar 17 18:46:03.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:03.839947 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:46:03.883704 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation Mar 17 18:46:03.990908 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:46:03.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:03.993622 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:46:04.079207 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:46:04.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:04.223183 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 17 18:46:04.282517 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:46:04.282557 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 18:46:04.282590 kernel: GPT:9289727 != 125829119 Mar 17 18:46:04.282725 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:46:04.282747 kernel: GPT:9289727 != 125829119 Mar 17 18:46:04.282767 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:46:04.282787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:46:04.282807 kernel: scsi host0: Virtio SCSI HBA Mar 17 18:46:04.299747 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Mar 17 18:46:04.332964 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 17 18:46:04.333018 kernel: AES CTR mode by8 optimization enabled Mar 17 18:46:04.355188 kernel: ACPI: bus type USB registered Mar 17 18:46:04.355417 kernel: usbcore: registered new interface driver usbfs Mar 17 18:46:04.356270 kernel: usbcore: registered new interface driver hub Mar 17 18:46:04.356334 kernel: usbcore: registered new device driver usb Mar 17 18:46:04.368255 kernel: libata version 3.00 loaded. Mar 17 18:46:04.392440 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:46:04.484557 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (432) Mar 17 18:46:04.484608 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 17 18:46:04.484954 kernel: scsi host1: ata_piix Mar 17 18:46:04.485431 kernel: scsi host2: ata_piix Mar 17 18:46:04.485648 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Mar 17 18:46:04.485672 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Mar 17 18:46:04.485691 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Mar 17 18:46:04.483850 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:46:04.494445 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:46:04.533492 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:46:04.539453 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:46:04.543972 systemd[1]: Starting disk-uuid.service... Mar 17 18:46:04.558397 disk-uuid[504]: Primary Header is updated. Mar 17 18:46:04.558397 disk-uuid[504]: Secondary Entries is updated. Mar 17 18:46:04.558397 disk-uuid[504]: Secondary Header is updated. 
Mar 17 18:46:04.576275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:46:04.603283 kernel: ehci-pci: EHCI PCI platform driver Mar 17 18:46:04.635259 kernel: uhci_hcd: USB Universal Host Controller Interface driver Mar 17 18:46:04.701474 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Mar 17 18:46:04.705960 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Mar 17 18:46:04.706363 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Mar 17 18:46:04.706566 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Mar 17 18:46:04.706727 kernel: hub 1-0:1.0: USB hub found Mar 17 18:46:04.706961 kernel: hub 1-0:1.0: 2 ports detected Mar 17 18:46:05.603792 disk-uuid[505]: The operation has completed successfully. Mar 17 18:46:05.604828 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:46:05.732719 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:46:05.734418 systemd[1]: Finished disk-uuid.service. Mar 17 18:46:05.736803 kernel: kauditd_printk_skb: 5 callbacks suppressed Mar 17 18:46:05.736915 kernel: audit: type=1130 audit(1742237165.735:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:05.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:05.742789 kernel: audit: type=1131 audit(1742237165.736:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:05.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:05.743466 systemd[1]: Starting verity-setup.service... Mar 17 18:46:05.791287 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 18:46:05.974494 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:46:05.981694 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:46:05.988043 systemd[1]: Finished verity-setup.service. Mar 17 18:46:05.993607 kernel: audit: type=1130 audit(1742237165.988:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:05.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.150294 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:46:06.152564 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:46:06.154888 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:46:06.159073 systemd[1]: Starting ignition-setup.service... Mar 17 18:46:06.162583 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:46:06.203751 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:46:06.203875 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:46:06.203897 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:46:06.235723 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:46:06.259189 systemd[1]: Finished ignition-setup.service. Mar 17 18:46:06.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.263894 systemd[1]: Starting ignition-fetch-offline.service... 
Mar 17 18:46:06.265397 kernel: audit: type=1130 audit(1742237166.259:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.472660 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:46:06.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.479259 kernel: audit: type=1130 audit(1742237166.474:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.479628 kernel: audit: type=1334 audit(1742237166.475:21): prog-id=9 op=LOAD Mar 17 18:46:06.475000 audit: BPF prog-id=9 op=LOAD Mar 17 18:46:06.484039 systemd[1]: Starting systemd-networkd.service... Mar 17 18:46:06.575782 systemd-networkd[691]: lo: Link UP Mar 17 18:46:06.575802 systemd-networkd[691]: lo: Gained carrier Mar 17 18:46:06.578726 systemd-networkd[691]: Enumeration completed Mar 17 18:46:06.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.581081 systemd[1]: Started systemd-networkd.service. Mar 17 18:46:06.624181 kernel: audit: type=1130 audit(1742237166.581:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.581754 systemd-networkd[691]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:46:06.582496 systemd[1]: Reached target network.target. 
Mar 17 18:46:06.589604 systemd-networkd[691]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Mar 17 18:46:06.634588 systemd-networkd[691]: eth1: Link UP Mar 17 18:46:06.634598 systemd-networkd[691]: eth1: Gained carrier Mar 17 18:46:06.635073 systemd[1]: Starting iscsiuio.service... Mar 17 18:46:06.640022 systemd-networkd[691]: eth0: Link UP Mar 17 18:46:06.640031 systemd-networkd[691]: eth0: Gained carrier Mar 17 18:46:06.659664 systemd[1]: Started iscsiuio.service. Mar 17 18:46:06.666485 kernel: audit: type=1130 audit(1742237166.660:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.664140 systemd-networkd[691]: eth0: DHCPv4 address 134.199.210.138/20, gateway 134.199.208.1 acquired from 169.254.169.253 Mar 17 18:46:06.670267 systemd[1]: Starting iscsid.service... Mar 17 18:46:06.677801 systemd-networkd[691]: eth1: DHCPv4 address 10.124.0.23/20 acquired from 169.254.169.253 Mar 17 18:46:06.681729 iscsid[696]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:46:06.681729 iscsid[696]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:46:06.681729 iscsid[696]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:46:06.681729 iscsid[696]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:46:06.681729 iscsid[696]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:46:06.681729 iscsid[696]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:46:06.714428 kernel: audit: type=1130 audit(1742237166.685:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.685134 systemd[1]: Started iscsid.service. Mar 17 18:46:06.696908 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:46:06.729550 ignition[616]: Ignition 2.14.0 Mar 17 18:46:06.729580 ignition[616]: Stage: fetch-offline Mar 17 18:46:06.741599 kernel: audit: type=1130 audit(1742237166.734:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.733676 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:46:06.729782 ignition[616]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:46:06.735111 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:46:06.729831 ignition[616]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:46:06.735948 systemd[1]: Reached target remote-cryptsetup.target. 
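iscsid's warning above spells out its own fix: it wants /etc/iscsi/initiatorname.iscsi containing a well-formed IQN. A sketch writing that file under a demo prefix so it runs without root; the IQN is an illustrative placeholder, not this droplet's:

```shell
# $prefix stands in for / on a real host.
prefix=/tmp/demo-iscsi
mkdir -p "$prefix/etc/iscsi"
printf 'InitiatorName=iqn.2025-03.io.example:demo-host\n' \
    > "$prefix/etc/iscsi/initiatorname.iscsi"
cat "$prefix/etc/iscsi/initiatorname.iscsi"
```

The naming scheme follows iscsid's own example in the log (iqn.2001-04.com.redhat:fc6): iqn, the year-month your domain was registered, the reversed domain name, and an optional identifier.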
Mar 17 18:46:06.736747 systemd[1]: Reached target remote-fs.target. Mar 17 18:46:06.743187 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:46:06.757965 ignition[616]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:46:06.758275 ignition[616]: parsed url from cmdline: "" Mar 17 18:46:06.758285 ignition[616]: no config URL provided Mar 17 18:46:06.758298 ignition[616]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:46:06.758321 ignition[616]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:46:06.760576 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:46:06.758340 ignition[616]: failed to fetch config: resource requires networking Mar 17 18:46:06.765820 ignition[616]: Ignition finished successfully Mar 17 18:46:06.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.769820 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:46:06.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.773446 systemd[1]: Starting ignition-fetch.service... 
Mar 17 18:46:06.802245 ignition[710]: Ignition 2.14.0 Mar 17 18:46:06.803329 ignition[710]: Stage: fetch Mar 17 18:46:06.803633 ignition[710]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:46:06.803674 ignition[710]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:46:06.806680 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:46:06.806862 ignition[710]: parsed url from cmdline: "" Mar 17 18:46:06.806867 ignition[710]: no config URL provided Mar 17 18:46:06.806873 ignition[710]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:46:06.806885 ignition[710]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:46:06.806931 ignition[710]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Mar 17 18:46:06.836732 ignition[710]: GET result: OK Mar 17 18:46:06.837698 ignition[710]: parsing config with SHA512: b58ce72210f978cb6072699c8f953167e6384c492464dfed669d89e8b9e42be208f3fe3861ff5a1b3eebf93d0e7a23bfed619495f9b0cdca17faaacc046958d4 Mar 17 18:46:06.852423 unknown[710]: fetched base config from "system" Mar 17 18:46:06.852438 unknown[710]: fetched base config from "system" Mar 17 18:46:06.853367 ignition[710]: fetch: fetch complete Mar 17 18:46:06.852447 unknown[710]: fetched user config from "digitalocean" Mar 17 18:46:06.853378 ignition[710]: fetch: fetch passed Mar 17 18:46:06.855663 systemd[1]: Finished ignition-fetch.service. Mar 17 18:46:06.853503 ignition[710]: Ignition finished successfully Mar 17 18:46:06.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.860670 systemd[1]: Starting ignition-kargs.service... 
Mar 17 18:46:06.877391 ignition[716]: Ignition 2.14.0 Mar 17 18:46:06.881368 ignition[716]: Stage: kargs Mar 17 18:46:06.883611 ignition[716]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:46:06.883648 ignition[716]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:46:06.886805 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:46:06.890420 ignition[716]: kargs: kargs passed Mar 17 18:46:06.892372 systemd[1]: Finished ignition-kargs.service. Mar 17 18:46:06.890553 ignition[716]: Ignition finished successfully Mar 17 18:46:06.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.895660 systemd[1]: Starting ignition-disks.service... Mar 17 18:46:06.914743 ignition[722]: Ignition 2.14.0 Mar 17 18:46:06.914762 ignition[722]: Stage: disks Mar 17 18:46:06.915036 ignition[722]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:46:06.915070 ignition[722]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:46:06.919180 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:46:06.923047 ignition[722]: disks: disks passed Mar 17 18:46:06.923248 ignition[722]: Ignition finished successfully Mar 17 18:46:06.926455 systemd[1]: Finished ignition-disks.service. Mar 17 18:46:06.927336 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:46:06.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:06.927888 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:46:06.929297 systemd[1]: Reached target local-fs.target. Mar 17 18:46:06.930799 systemd[1]: Reached target sysinit.target. Mar 17 18:46:06.931404 systemd[1]: Reached target basic.target. Mar 17 18:46:06.934389 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:46:06.973783 systemd-fsck[730]: ROOT: clean, 623/553520 files, 56022/553472 blocks Mar 17 18:46:06.986277 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:46:06.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:06.995112 systemd[1]: Mounting sysroot.mount... Mar 17 18:46:07.014298 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:46:07.015640 systemd[1]: Mounted sysroot.mount. Mar 17 18:46:07.016635 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:46:07.020537 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:46:07.023156 systemd[1]: Starting flatcar-digitalocean-network.service... Mar 17 18:46:07.026879 systemd[1]: Starting flatcar-metadata-hostname.service... Mar 17 18:46:07.027603 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:46:07.027686 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:46:07.039973 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:46:07.052237 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:46:07.063910 systemd[1]: Starting initrd-setup-root.service... 
Mar 17 18:46:07.087133 initrd-setup-root[743]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:46:07.098133 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (738) Mar 17 18:46:07.122947 initrd-setup-root[751]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:46:07.125700 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:46:07.125747 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:46:07.125765 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:46:07.136716 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:46:07.156406 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:46:07.161566 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:46:07.305544 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:46:07.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:07.307846 systemd[1]: Starting ignition-mount.service... Mar 17 18:46:07.323867 coreos-metadata[737]: Mar 17 18:46:07.318 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 18:46:07.327461 systemd[1]: Starting sysroot-boot.service... Mar 17 18:46:07.345797 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 18:46:07.346005 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 17 18:46:07.353352 coreos-metadata[737]: Mar 17 18:46:07.353 INFO Fetch successful Mar 17 18:46:07.370740 coreos-metadata[737]: Mar 17 18:46:07.370 INFO wrote hostname ci-3510.3.7-0-797a2fde87 to /sysroot/etc/hostname Mar 17 18:46:07.372833 systemd[1]: Finished flatcar-metadata-hostname.service. 
Mar 17 18:46:07.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:07.378013 coreos-metadata[736]: Mar 17 18:46:07.377 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 18:46:07.396044 ignition[807]: INFO : Ignition 2.14.0 Mar 17 18:46:07.397310 ignition[807]: INFO : Stage: mount Mar 17 18:46:07.398317 ignition[807]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:46:07.399298 ignition[807]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:46:07.400493 coreos-metadata[736]: Mar 17 18:46:07.400 INFO Fetch successful Mar 17 18:46:07.411151 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:46:07.414294 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Mar 17 18:46:07.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:07.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:07.415514 systemd[1]: Finished flatcar-digitalocean-network.service. Mar 17 18:46:07.420080 ignition[807]: INFO : mount: mount passed Mar 17 18:46:07.424100 ignition[807]: INFO : Ignition finished successfully Mar 17 18:46:07.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:07.426284 systemd[1]: Finished ignition-mount.service. Mar 17 18:46:07.428830 systemd[1]: Starting ignition-files.service... Mar 17 18:46:07.444801 systemd[1]: Finished sysroot-boot.service. Mar 17 18:46:07.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:07.456362 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:46:07.481892 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816) Mar 17 18:46:07.487535 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:46:07.488357 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:46:07.488387 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:46:07.508457 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:46:07.535179 ignition[835]: INFO : Ignition 2.14.0 Mar 17 18:46:07.535179 ignition[835]: INFO : Stage: files Mar 17 18:46:07.537146 ignition[835]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:46:07.537146 ignition[835]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:46:07.539274 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:46:07.546793 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:46:07.552147 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:46:07.553598 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:46:07.559909 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:46:07.561949 ignition[835]: INFO : files: 
ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:46:07.563319 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:46:07.562827 unknown[835]: wrote ssh authorized keys file for user: core Mar 17 18:46:07.567866 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:46:07.567866 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:46:07.567866 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 18:46:07.567866 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 18:46:07.655099 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:46:08.048274 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 18:46:08.050181 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:46:08.051279 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 18:46:08.075635 systemd-networkd[691]: eth0: Gained IPv6LL Mar 17 18:46:08.140044 systemd-networkd[691]: eth1: Gained IPv6LL Mar 17 18:46:08.545253 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Mar 17 18:46:08.759246 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:46:08.761001 ignition[835]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:46:08.762802 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:46:08.764111 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:46:08.765715 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:46:08.767624 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:46:08.767624 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:46:08.767624 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:46:08.777060 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:46:08.779552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:46:08.779552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:46:08.779552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 18:46:08.779552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 18:46:08.779552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): 
[started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 18:46:08.779552 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 18:46:09.244484 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Mar 17 18:46:09.794704 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 18:46:09.796308 ignition[835]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 17 18:46:09.797269 ignition[835]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 17 18:46:09.798107 ignition[835]: INFO : files: op(e): [started] processing unit "containerd.service" Mar 17 18:46:09.800027 ignition[835]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:46:09.802333 ignition[835]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:46:09.804546 ignition[835]: INFO : files: op(e): [finished] processing unit "containerd.service" Mar 17 18:46:09.804546 ignition[835]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Mar 17 18:46:09.807392 ignition[835]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:46:09.807392 ignition[835]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:46:09.807392 ignition[835]: INFO : files: op(10): [finished] processing unit 
"prepare-helm.service" Mar 17 18:46:09.807392 ignition[835]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:46:09.807392 ignition[835]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:46:09.807392 ignition[835]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:46:09.807392 ignition[835]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:46:09.814827 ignition[835]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:46:09.819676 ignition[835]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:46:09.819676 ignition[835]: INFO : files: files passed Mar 17 18:46:09.819676 ignition[835]: INFO : Ignition finished successfully Mar 17 18:46:09.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:09.820479 systemd[1]: Finished ignition-files.service. Mar 17 18:46:09.828418 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:46:09.829262 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:46:09.831391 systemd[1]: Starting ignition-quench.service... Mar 17 18:46:09.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:09.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:46:09.840975 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:46:09.841355 systemd[1]: Finished ignition-quench.service. Mar 17 18:46:09.854523 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:46:09.855957 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:46:09.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:09.859658 systemd[1]: Reached target ignition-complete.target. Mar 17 18:46:09.863103 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:46:09.916167 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:46:09.917447 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:46:09.919089 systemd[1]: Reached target initrd-fs.target. Mar 17 18:46:09.920383 systemd[1]: Reached target initrd.target. Mar 17 18:46:09.921710 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:46:09.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:09.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:09.933144 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:46:09.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:09.994925 systemd[1]: Finished dracut-pre-pivot.service. 
Mar 17 18:46:09.999278 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:46:10.026731 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:46:10.029290 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:46:10.031091 systemd[1]: Stopped target timers.target. Mar 17 18:46:10.033242 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:46:10.034309 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:46:10.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.036388 systemd[1]: Stopped target initrd.target. Mar 17 18:46:10.037720 systemd[1]: Stopped target basic.target. Mar 17 18:46:10.040795 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:46:10.042655 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:46:10.044336 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:46:10.045878 systemd[1]: Stopped target remote-fs.target. Mar 17 18:46:10.047274 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:46:10.049234 systemd[1]: Stopped target sysinit.target. Mar 17 18:46:10.050735 systemd[1]: Stopped target local-fs.target. Mar 17 18:46:10.052545 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:46:10.054057 systemd[1]: Stopped target swap.target. Mar 17 18:46:10.057368 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:46:10.058704 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:46:10.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.067447 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:46:10.068964 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Mar 17 18:46:10.070091 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:46:10.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.078339 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:46:10.078752 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:46:10.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.079839 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:46:10.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.080028 systemd[1]: Stopped ignition-files.service. Mar 17 18:46:10.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.080662 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 18:46:10.080906 systemd[1]: Stopped flatcar-metadata-hostname.service. Mar 17 18:46:10.084096 systemd[1]: Stopping ignition-mount.service... Mar 17 18:46:10.092138 iscsid[696]: iscsid shutting down. Mar 17 18:46:10.093149 systemd[1]: Stopping iscsid.service... Mar 17 18:46:10.096581 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:46:10.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:10.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.097375 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:46:10.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.097783 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:46:10.099184 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:46:10.099537 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:46:10.106978 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:46:10.107349 systemd[1]: Stopped iscsid.service. Mar 17 18:46:10.111309 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:46:10.111521 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:46:10.119367 systemd[1]: Stopping iscsiuio.service... Mar 17 18:46:10.127904 systemd[1]: iscsiuio.service: Deactivated successfully. 
Mar 17 18:46:10.140591 ignition[873]: INFO : Ignition 2.14.0 Mar 17 18:46:10.140591 ignition[873]: INFO : Stage: umount Mar 17 18:46:10.140591 ignition[873]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:46:10.140591 ignition[873]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Mar 17 18:46:10.140591 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 18:46:10.140591 ignition[873]: INFO : umount: umount passed Mar 17 18:46:10.140591 ignition[873]: INFO : Ignition finished successfully Mar 17 18:46:10.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:10.128129 systemd[1]: Stopped iscsiuio.service. Mar 17 18:46:10.151197 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:46:10.151523 systemd[1]: Stopped ignition-mount.service. Mar 17 18:46:10.152369 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:46:10.152466 systemd[1]: Stopped ignition-disks.service. Mar 17 18:46:10.153091 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:46:10.153174 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:46:10.153739 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:46:10.153806 systemd[1]: Stopped ignition-fetch.service. Mar 17 18:46:10.154392 systemd[1]: Stopped target network.target. Mar 17 18:46:10.154860 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:46:10.155071 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:46:10.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.155631 systemd[1]: Stopped target paths.target. Mar 17 18:46:10.156187 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:46:10.158533 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:46:10.165127 systemd[1]: Stopped target slices.target. Mar 17 18:46:10.174328 systemd[1]: Stopped target sockets.target. Mar 17 18:46:10.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.179008 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:46:10.179137 systemd[1]: Closed iscsid.socket. 
Mar 17 18:46:10.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.179757 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:46:10.179837 systemd[1]: Closed iscsiuio.socket. Mar 17 18:46:10.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.180357 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:46:10.201000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:46:10.180457 systemd[1]: Stopped ignition-setup.service. Mar 17 18:46:10.181408 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:46:10.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.182543 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:46:10.185148 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:46:10.185496 systemd-networkd[691]: eth0: DHCPv6 lease lost Mar 17 18:46:10.189407 systemd-networkd[691]: eth1: DHCPv6 lease lost Mar 17 18:46:10.193085 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:46:10.193350 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:46:10.195800 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:46:10.196067 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:46:10.199064 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:46:10.199455 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:46:10.201435 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:46:10.201735 systemd[1]: Closed systemd-networkd.socket. 
Mar 17 18:46:10.202423 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:46:10.202527 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:46:10.204967 systemd[1]: Stopping network-cleanup.service... Mar 17 18:46:10.217000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:46:10.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.228962 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:46:10.229152 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:46:10.229930 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:46:10.230056 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:46:10.230767 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:46:10.230851 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:46:10.231596 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:46:10.234077 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:46:10.273503 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:46:10.273779 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:46:10.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:10.277624 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:46:10.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.277805 systemd[1]: Stopped network-cleanup.service. Mar 17 18:46:10.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.285662 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:46:10.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.285740 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:46:10.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.286454 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Mar 17 18:46:10.286516 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:46:10.300110 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:46:10.300223 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:46:10.301154 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:46:10.301262 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:46:10.302005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:46:10.302084 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:46:10.304435 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:46:10.304961 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 18:46:10.305065 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Mar 17 18:46:10.306193 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:46:10.306378 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:46:10.306957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:46:10.307030 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:46:10.309657 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 18:46:10.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:10.328028 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:46:10.328196 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:46:10.329081 systemd[1]: Reached target initrd-switch-root.target. 
Mar 17 18:46:10.339016 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:46:10.358865 systemd[1]: Switching root. Mar 17 18:46:10.365000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:46:10.365000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:46:10.368000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:46:10.377000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:46:10.377000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:46:10.399487 systemd-journald[184]: Journal stopped Mar 17 18:46:16.530786 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Mar 17 18:46:16.530945 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:46:16.530976 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:46:16.530998 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:46:16.531016 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:46:16.531040 kernel: SELinux: policy capability open_perms=1 Mar 17 18:46:16.531067 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:46:16.531097 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:46:16.531114 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:46:16.531132 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:46:16.531151 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:46:16.531169 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:46:16.531193 kernel: kauditd_printk_skb: 62 callbacks suppressed Mar 17 18:46:16.531233 kernel: audit: type=1403 audit(1742237170.892:88): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:46:16.559348 systemd[1]: Successfully loaded SELinux policy in 71.678ms. Mar 17 18:46:16.559433 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.962ms. 
Mar 17 18:46:16.559462 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:46:16.559483 systemd[1]: Detected virtualization kvm.
Mar 17 18:46:16.559505 systemd[1]: Detected architecture x86-64.
Mar 17 18:46:16.559525 systemd[1]: Detected first boot.
Mar 17 18:46:16.559546 systemd[1]: Hostname set to .
Mar 17 18:46:16.559573 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:46:16.559599 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:46:16.559634 kernel: audit: type=1400 audit(1742237171.194:89): avc: denied { associate } for pid=924 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:46:16.559654 kernel: audit: type=1300 audit(1742237171.194:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001896ac a1=c00002cb58 a2=c00002aa40 a3=32 items=0 ppid=906 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:46:16.559672 kernel: audit: type=1327 audit(1742237171.194:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:46:16.559688 kernel: audit: type=1400 audit(1742237171.196:90): avc: denied { associate } for pid=924 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:46:16.559705 kernel: audit: type=1300 audit(1742237171.196:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000189785 a2=1ed a3=0 items=2 ppid=906 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:46:16.559721 kernel: audit: type=1307 audit(1742237171.196:90): cwd="/"
Mar 17 18:46:16.559740 kernel: audit: type=1302 audit(1742237171.196:90): item=0 name=(null) inode=2 dev=00:27 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:16.559756 kernel: audit: type=1302 audit(1742237171.196:90): item=1 name=(null) inode=3 dev=00:27 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:16.559772 kernel: audit: type=1327 audit(1742237171.196:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:46:16.559789 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:46:16.559807 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:46:16.559826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:46:16.559876 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:46:16.559898 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:46:16.559915 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Mar 17 18:46:16.559932 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:46:16.559949 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:46:16.559966 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 18:46:16.559983 systemd[1]: Created slice system-getty.slice.
Mar 17 18:46:16.560004 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:46:16.560021 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:46:16.560038 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:46:16.560055 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:46:16.560072 systemd[1]: Created slice user.slice.
Mar 17 18:46:16.560089 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:46:16.560107 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:46:16.560124 systemd[1]: Set up automount boot.automount.
Mar 17 18:46:16.560141 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:46:16.560161 systemd[1]: Reached target integritysetup.target.
Mar 17 18:46:16.560179 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:46:16.560196 systemd[1]: Reached target remote-fs.target.
Mar 17 18:46:16.570072 systemd[1]: Reached target slices.target.
Mar 17 18:46:16.570141 systemd[1]: Reached target swap.target.
Mar 17 18:46:16.570164 systemd[1]: Reached target torcx.target.
Mar 17 18:46:16.570185 systemd[1]: Reached target veritysetup.target.
Mar 17 18:46:16.570205 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:46:16.570336 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:46:16.570364 kernel: audit: type=1400 audit(1742237176.169:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:46:16.570397 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:46:16.570419 kernel: audit: type=1335 audit(1742237176.169:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Mar 17 18:46:16.570439 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:46:16.570458 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:46:16.570478 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:46:16.570499 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:46:16.570521 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:46:16.570542 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:46:16.570560 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:46:16.570589 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:46:16.570613 systemd[1]: Mounting media.mount...
Mar 17 18:46:16.570634 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:46:16.570656 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:46:16.570679 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:46:16.570701 systemd[1]: Mounting tmp.mount...
Mar 17 18:46:16.570720 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:46:16.570739 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:46:16.570758 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:46:16.570784 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:46:16.570803 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:46:16.570823 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:46:16.570841 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:46:16.570861 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:46:16.570879 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:46:16.570902 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:46:16.570923 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 17 18:46:16.570948 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Mar 17 18:46:16.570969 systemd[1]: Starting systemd-journald.service...
Mar 17 18:46:16.570991 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:46:16.571014 systemd[1]: Starting systemd-network-generator.service...
Mar 17 18:46:16.571035 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 18:46:16.571058 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:46:16.571079 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:46:16.571133 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 18:46:16.571155 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 18:46:16.580278 systemd[1]: Mounted media.mount.
Mar 17 18:46:16.580571 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 18:46:16.580613 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 18:46:16.580643 systemd[1]: Mounted tmp.mount.
Mar 17 18:46:16.580693 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:46:16.580737 kernel: audit: type=1130 audit(1742237176.392:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.580797 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 18:46:16.580928 systemd[1]: Finished modprobe@configfs.service.
Mar 17 18:46:16.580961 kernel: audit: type=1130 audit(1742237176.408:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.581155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:46:16.581279 kernel: audit: type=1131 audit(1742237176.408:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.581318 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:46:16.581343 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:46:16.581368 kernel: audit: type=1130 audit(1742237176.418:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.581398 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:46:16.581421 systemd[1]: Finished systemd-remount-fs.service.
Mar 17 18:46:16.581447 kernel: audit: type=1131 audit(1742237176.418:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.581471 kernel: audit: type=1130 audit(1742237176.429:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.581492 systemd[1]: Mounting sys-kernel-config.mount...
Mar 17 18:46:16.581515 kernel: audit: type=1131 audit(1742237176.429:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.581539 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:46:16.581568 kernel: audit: type=1130 audit(1742237176.435:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.581596 systemd[1]: Starting systemd-hwdb-update.service...
Mar 17 18:46:16.581664 kernel: fuse: init (API version 7.34)
Mar 17 18:46:16.581687 systemd[1]: Starting systemd-random-seed.service...
Mar 17 18:46:16.581711 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:46:16.581734 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:46:16.581757 systemd[1]: Finished systemd-network-generator.service.
Mar 17 18:46:16.581777 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 18:46:16.581796 systemd[1]: Finished modprobe@fuse.service.
Mar 17 18:46:16.581816 systemd[1]: Mounted sys-kernel-config.mount.
Mar 17 18:46:16.581854 systemd[1]: Finished systemd-random-seed.service.
Mar 17 18:46:16.581874 systemd[1]: Reached target first-boot-complete.target.
Mar 17 18:46:16.592352 systemd[1]: Reached target network-pre.target.
Mar 17 18:46:16.592405 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Mar 17 18:46:16.592429 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:46:16.592462 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:46:16.592486 kernel: loop: module loaded
Mar 17 18:46:16.592509 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:46:16.592529 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:46:16.592551 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Mar 17 18:46:16.592576 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:46:16.592599 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:46:16.592630 systemd-journald[1005]: Journal started
Mar 17 18:46:16.592758 systemd-journald[1005]: Runtime Journal (/run/log/journal/80521a9e2638412bb81f5b33b28bc633) is 4.9M, max 39.5M, 34.5M free.
Mar 17 18:46:16.169000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:46:16.169000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Mar 17 18:46:16.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.512000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:46:16.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.512000 audit[1005]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff53d79f00 a2=4000 a3=7fff53d79f9c items=0 ppid=1 pid=1005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:46:16.512000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:46:16.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.598825 systemd[1]: Started systemd-journald.service.
Mar 17 18:46:16.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.600013 systemd[1]: Starting systemd-journal-flush.service...
Mar 17 18:46:16.624595 systemd-journald[1005]: Time spent on flushing to /var/log/journal/80521a9e2638412bb81f5b33b28bc633 is 80.707ms for 1122 entries.
Mar 17 18:46:16.624595 systemd-journald[1005]: System Journal (/var/log/journal/80521a9e2638412bb81f5b33b28bc633) is 8.0M, max 195.6M, 187.6M free.
Mar 17 18:46:16.728083 systemd-journald[1005]: Received client request to flush runtime journal.
Mar 17 18:46:16.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.659794 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:46:16.729745 systemd[1]: Finished systemd-journal-flush.service.
Mar 17 18:46:16.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.742017 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:46:16.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.749344 systemd[1]: Starting systemd-sysusers.service...
Mar 17 18:46:16.811726 systemd[1]: Finished systemd-sysusers.service.
Mar 17 18:46:16.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.824471 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:46:16.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.832138 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:46:16.837679 systemd[1]: Starting systemd-udev-settle.service...
Mar 17 18:46:16.861754 udevadm[1068]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 18:46:16.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:16.947595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:46:18.088610 systemd[1]: Finished systemd-hwdb-update.service.
Mar 17 18:46:18.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.099607 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:46:18.145860 systemd-udevd[1071]: Using default interface naming scheme 'v252'.
Mar 17 18:46:18.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.190105 systemd[1]: Started systemd-udevd.service.
Mar 17 18:46:18.195898 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:46:18.214407 systemd[1]: Starting systemd-userdbd.service...
Mar 17 18:46:18.292431 systemd[1]: Found device dev-ttyS0.device.
Mar 17 18:46:18.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.327591 systemd[1]: Started systemd-userdbd.service.
Mar 17 18:46:18.373322 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:46:18.373801 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:46:18.379499 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:46:18.382091 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:46:18.391008 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:46:18.393650 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:46:18.393810 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:46:18.393969 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:46:18.394933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:46:18.395404 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:46:18.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.403570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:46:18.405534 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:46:18.406456 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:46:18.407787 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:46:18.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.436282 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:46:18.437596 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:46:18.540265 systemd-networkd[1080]: lo: Link UP
Mar 17 18:46:18.540822 systemd-networkd[1080]: lo: Gained carrier
Mar 17 18:46:18.541825 systemd-networkd[1080]: Enumeration completed
Mar 17 18:46:18.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:46:18.542208 systemd[1]: Started systemd-networkd.service.
Mar 17 18:46:18.543397 systemd-networkd[1080]: eth1: Configuring with /run/systemd/network/10-6e:8e:88:3a:be:5c.network.
Mar 17 18:46:18.545249 systemd-networkd[1080]: eth0: Configuring with /run/systemd/network/10-f2:f6:58:75:b8:8b.network.
Mar 17 18:46:18.546474 systemd-networkd[1080]: eth1: Link UP
Mar 17 18:46:18.546606 systemd-networkd[1080]: eth1: Gained carrier
Mar 17 18:46:18.551628 systemd-networkd[1080]: eth0: Link UP
Mar 17 18:46:18.551642 systemd-networkd[1080]: eth0: Gained carrier
Mar 17 18:46:18.544000 audit[1082]: AVC avc: denied { confidentiality } for pid=1082 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 18:46:18.566382 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 18:46:18.565628 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:46:18.544000 audit[1082]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5597c02d7280 a1=338ac a2=7f70d9319bc5 a3=5 items=110 ppid=1071 pid=1082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:46:18.544000 audit: CWD cwd="/"
Mar 17 18:46:18.544000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=1 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=2 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=3 name=(null) inode=13286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=4 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=5 name=(null) inode=13287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=6 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=7 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=8 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=9 name=(null) inode=13289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=10 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=11 name=(null) inode=13290 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=12 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=13 name=(null) inode=13291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=14 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=15 name=(null) inode=13292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=16 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=17 name=(null) inode=13293 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=18 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=19 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=20 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=21 name=(null) inode=13295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=22 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=23 name=(null) inode=13296 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=24 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=25 name=(null) inode=13297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=26 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=27 name=(null) inode=13298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=28 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=29 name=(null) inode=13299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=30 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=31 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=32 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=33 name=(null) inode=13301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:46:18.544000 audit: PATH item=34 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0
cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=35 name=(null) inode=13302 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=36 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=37 name=(null) inode=13303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=38 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=39 name=(null) inode=13304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=40 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=41 name=(null) inode=13305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=42 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=43 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:46:18.544000 audit: PATH item=44 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=45 name=(null) inode=13307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=46 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=47 name=(null) inode=13308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=48 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=49 name=(null) inode=13309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=50 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=51 name=(null) inode=13310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=52 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=53 
name=(null) inode=13311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=55 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=56 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=57 name=(null) inode=14337 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=58 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=59 name=(null) inode=14338 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=60 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=61 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=62 name=(null) inode=14339 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=63 name=(null) inode=14340 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=64 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=65 name=(null) inode=14341 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=66 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=67 name=(null) inode=14342 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=68 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=69 name=(null) inode=14343 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=70 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=71 name=(null) inode=14344 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=72 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=73 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=74 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=75 name=(null) inode=14346 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=76 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=77 name=(null) inode=14347 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=78 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=79 name=(null) inode=14348 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=80 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=81 name=(null) inode=14349 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=82 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=83 name=(null) inode=14350 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=84 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=85 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=86 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=87 name=(null) inode=14352 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=88 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=89 name=(null) inode=14353 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=90 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=91 name=(null) inode=14354 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=92 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=93 name=(null) inode=14355 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=94 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=95 name=(null) inode=14356 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=96 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=97 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=98 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:46:18.544000 audit: PATH item=99 name=(null) inode=14358 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=100 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=101 name=(null) inode=14359 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=102 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=103 name=(null) inode=14360 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=104 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=105 name=(null) inode=14361 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=106 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=107 name=(null) inode=14362 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH 
item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PATH item=109 name=(null) inode=14363 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:46:18.544000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:46:18.588245 kernel: ACPI: button: Power Button [PWRF] Mar 17 18:46:18.590295 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Mar 17 18:46:18.695407 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 18:46:18.723269 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:46:18.887547 kernel: EDAC MC: Ver: 3.0.0 Mar 17 18:46:18.919175 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:46:18.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:18.922354 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:46:18.970759 lvm[1114]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:46:19.016151 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:46:19.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.019415 systemd[1]: Reached target cryptsetup.target. Mar 17 18:46:19.022525 systemd[1]: Starting lvm2-activation.service... Mar 17 18:46:19.032538 lvm[1116]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 17 18:46:19.071177 systemd[1]: Finished lvm2-activation.service. Mar 17 18:46:19.071872 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:46:19.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.075417 systemd[1]: Mounting media-configdrive.mount... Mar 17 18:46:19.075978 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:46:19.076055 systemd[1]: Reached target machines.target. Mar 17 18:46:19.078997 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:46:19.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.116249 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:46:19.134317 kernel: ISO 9660 Extensions: RRIP_1991A Mar 17 18:46:19.139288 systemd[1]: Mounted media-configdrive.mount. Mar 17 18:46:19.140018 systemd[1]: Reached target local-fs.target. Mar 17 18:46:19.143127 systemd[1]: Starting ldconfig.service... Mar 17 18:46:19.144670 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:46:19.144811 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:46:19.147779 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:46:19.150972 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:46:19.154112 systemd[1]: Starting systemd-sysext.service... 
Mar 17 18:46:19.174881 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1126 (bootctl) Mar 17 18:46:19.179439 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:46:19.193713 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:46:19.225353 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:46:19.225742 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:46:19.244967 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:46:19.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.247757 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:46:19.271250 kernel: loop0: detected capacity change from 0 to 210664 Mar 17 18:46:19.323562 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:46:19.353341 kernel: loop1: detected capacity change from 0 to 210664 Mar 17 18:46:19.394877 (sd-sysext)[1139]: Using extensions 'kubernetes'. Mar 17 18:46:19.396172 (sd-sysext)[1139]: Merged extensions into '/usr'. Mar 17 18:46:19.451404 systemd-fsck[1136]: fsck.fat 4.2 (2021-01-31) Mar 17 18:46:19.451404 systemd-fsck[1136]: /dev/vda1: 789 files, 119299/258078 clusters Mar 17 18:46:19.452863 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:46:19.463878 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:46:19.465611 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:46:19.473372 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:46:19.480022 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:46:19.492105 systemd[1]: Starting modprobe@loop.service... 
Mar 17 18:46:19.492840 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:46:19.493112 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:46:19.493391 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:46:19.516253 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:46:19.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.517741 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:46:19.520298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:46:19.520594 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:46:19.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.524415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:46:19.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:19.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.524842 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:46:19.528794 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:46:19.530393 systemd[1]: Finished modprobe@loop.service. Mar 17 18:46:19.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.539898 systemd[1]: Finished systemd-sysext.service. Mar 17 18:46:19.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:19.549854 systemd[1]: Mounting boot.mount... Mar 17 18:46:19.556946 systemd[1]: Starting ensure-sysext.service... Mar 17 18:46:19.560275 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:46:19.560459 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:46:19.562759 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:46:19.581052 systemd[1]: Reloading. Mar 17 18:46:19.613611 systemd-tmpfiles[1157]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Mar 17 18:46:19.631882 systemd-tmpfiles[1157]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:46:19.641339 systemd-tmpfiles[1157]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:46:19.723582 systemd-networkd[1080]: eth0: Gained IPv6LL Mar 17 18:46:19.797991 /usr/lib/systemd/system-generators/torcx-generator[1177]: time="2025-03-17T18:46:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:46:19.799826 /usr/lib/systemd/system-generators/torcx-generator[1177]: time="2025-03-17T18:46:19Z" level=info msg="torcx already run" Mar 17 18:46:19.982697 systemd-networkd[1080]: eth1: Gained IPv6LL Mar 17 18:46:20.026354 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:46:20.078669 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:46:20.088330 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:46:20.131866 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:46:20.249370 systemd[1]: Finished ldconfig.service. Mar 17 18:46:20.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.250272 systemd[1]: Mounted boot.mount. 
Mar 17 18:46:20.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.273087 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:46:20.284760 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:46:20.285248 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:46:20.288101 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:46:20.292518 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:46:20.296303 systemd[1]: Starting modprobe@loop.service... Mar 17 18:46:20.298127 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:46:20.298472 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:46:20.298752 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:46:20.301541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:46:20.301839 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:46:20.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:20.307068 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:46:20.308076 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:46:20.314769 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:46:20.316729 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:46:20.317154 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:46:20.321302 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:46:20.322633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:46:20.322957 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:46:20.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:20.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.326507 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:46:20.326758 systemd[1]: Finished modprobe@loop.service. Mar 17 18:46:20.327742 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:46:20.329223 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:46:20.329439 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:46:20.333606 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:46:20.334203 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:46:20.338328 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:46:20.342172 systemd[1]: Starting modprobe@drm.service... Mar 17 18:46:20.356943 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:46:20.360341 systemd[1]: Starting modprobe@loop.service... Mar 17 18:46:20.361442 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:46:20.361776 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:46:20.364812 systemd[1]: Starting systemd-networkd-wait-online.service... 
Mar 17 18:46:20.367048 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:46:20.372674 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:46:20.373021 systemd[1]: Finished modprobe@drm.service. Mar 17 18:46:20.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.381446 systemd[1]: Finished ensure-sysext.service. Mar 17 18:46:20.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.385593 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:46:20.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:46:20.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.385858 systemd[1]: Finished modprobe@loop.service. Mar 17 18:46:20.387808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:46:20.388111 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:46:20.388995 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:46:20.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.390575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:46:20.390862 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:46:20.391689 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:46:20.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.398612 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:46:20.566621 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Mar 17 18:46:20.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.571594 systemd[1]: Starting audit-rules.service... Mar 17 18:46:20.575381 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:46:20.579442 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:46:20.592956 systemd[1]: Starting systemd-resolved.service... Mar 17 18:46:20.605928 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:46:20.612773 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:46:20.615997 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:46:20.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.620955 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:46:20.641000 audit[1263]: SYSTEM_BOOT pid=1263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.647761 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:46:20.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:46:20.708990 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:46:20.712468 systemd[1]: Starting systemd-update-done.service... Mar 17 18:46:20.731319 systemd[1]: Finished systemd-update-done.service. Mar 17 18:46:20.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:46:20.741406 augenrules[1280]: No rules Mar 17 18:46:20.741000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:46:20.741000 audit[1280]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa9c6efe0 a2=420 a3=0 items=0 ppid=1256 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:46:20.741000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:46:20.743679 systemd[1]: Finished audit-rules.service. Mar 17 18:46:20.824534 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:46:20.825436 systemd[1]: Reached target time-set.target. Mar 17 18:46:20.829695 systemd-resolved[1260]: Positive Trust Anchors: Mar 17 18:46:20.830199 systemd-resolved[1260]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:46:20.830409 systemd-resolved[1260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:46:20.846835 systemd-resolved[1260]: Using system hostname 'ci-3510.3.7-0-797a2fde87'. Mar 17 18:46:20.851519 systemd[1]: Started systemd-resolved.service. Mar 17 18:46:20.852323 systemd[1]: Reached target network.target. Mar 17 18:46:20.852902 systemd[1]: Reached target network-online.target. Mar 17 18:46:20.853390 systemd[1]: Reached target nss-lookup.target. Mar 17 18:46:20.853830 systemd[1]: Reached target sysinit.target. Mar 17 18:46:20.854435 systemd[1]: Started motdgen.path. Mar 17 18:46:20.854946 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:46:20.855743 systemd[1]: Started logrotate.timer. Mar 17 18:46:20.856826 systemd[1]: Started mdadm.timer. Mar 17 18:46:20.857273 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:46:20.857729 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:46:20.857762 systemd[1]: Reached target paths.target. Mar 17 18:46:20.858165 systemd[1]: Reached target timers.target. Mar 17 18:46:20.859146 systemd[1]: Listening on dbus.socket. Mar 17 18:46:20.864162 systemd[1]: Starting docker.socket... Mar 17 18:46:20.874360 systemd[1]: Listening on sshd.socket. 
Mar 17 18:46:20.875027 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:46:20.875784 systemd[1]: Listening on docker.socket. Mar 17 18:46:20.876385 systemd[1]: Reached target sockets.target. Mar 17 18:46:20.876914 systemd[1]: Reached target basic.target. Mar 17 18:46:20.877606 systemd[1]: System is tainted: cgroupsv1 Mar 17 18:46:20.877688 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:46:20.877724 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:46:20.880041 systemd[1]: Starting containerd.service... Mar 17 18:46:20.883141 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 18:46:20.901233 systemd[1]: Starting dbus.service... Mar 17 18:46:20.906600 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:46:20.912760 systemd[1]: Starting extend-filesystems.service... Mar 17 18:46:20.918553 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:46:20.922568 systemd[1]: Starting kubelet.service... Mar 17 18:46:20.931279 systemd[1]: Starting motdgen.service... Mar 17 18:46:20.935923 systemd[1]: Starting prepare-helm.service... Mar 17 18:46:20.952884 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:46:20.958608 systemd[1]: Starting sshd-keygen.service... Mar 17 18:46:20.974262 systemd[1]: Starting systemd-logind.service... Mar 17 18:46:20.975251 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:46:20.975446 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:46:20.981396 systemd[1]: Starting update-engine.service... Mar 17 18:46:20.984530 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:46:21.052703 jq[1293]: false Mar 17 18:46:21.058349 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:46:21.059000 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:46:21.078178 tar[1310]: linux-amd64/helm Mar 17 18:46:21.108829 jq[1307]: true Mar 17 18:46:21.121461 systemd[1]: Started dbus.service. Mar 17 18:46:21.121064 dbus-daemon[1292]: [system] SELinux support is enabled Mar 17 18:46:21.126333 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:46:21.126405 systemd[1]: Reached target system-config.target. Mar 17 18:46:21.127111 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:46:21.127156 systemd[1]: Reached target user-config.target. Mar 17 18:46:21.127962 systemd-timesyncd[1262]: Contacted time server 172.234.37.140:123 (0.flatcar.pool.ntp.org). Mar 17 18:46:21.128057 systemd-timesyncd[1262]: Initial clock synchronization to Mon 2025-03-17 18:46:21.143981 UTC. Mar 17 18:46:21.138306 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:46:21.138676 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Mar 17 18:46:21.203603 jq[1325]: true Mar 17 18:46:21.209321 extend-filesystems[1294]: Found loop1 Mar 17 18:46:21.215544 extend-filesystems[1294]: Found vda Mar 17 18:46:21.215544 extend-filesystems[1294]: Found vda1 Mar 17 18:46:21.215544 extend-filesystems[1294]: Found vda2 Mar 17 18:46:21.215544 extend-filesystems[1294]: Found vda3 Mar 17 18:46:21.215544 extend-filesystems[1294]: Found usr Mar 17 18:46:21.215544 extend-filesystems[1294]: Found vda4 Mar 17 18:46:21.215544 extend-filesystems[1294]: Found vda6 Mar 17 18:46:21.215544 extend-filesystems[1294]: Found vda7 Mar 17 18:46:21.215544 extend-filesystems[1294]: Found vda9 Mar 17 18:46:21.215544 extend-filesystems[1294]: Checking size of /dev/vda9 Mar 17 18:46:21.293454 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:46:21.294043 systemd[1]: Finished motdgen.service. Mar 17 18:46:21.332944 extend-filesystems[1294]: Resized partition /dev/vda9 Mar 17 18:46:21.347900 extend-filesystems[1348]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:46:21.354474 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Mar 17 18:46:21.354529 update_engine[1306]: I0317 18:46:21.345780 1306 main.cc:92] Flatcar Update Engine starting Mar 17 18:46:21.354529 update_engine[1306]: I0317 18:46:21.353814 1306 update_check_scheduler.cc:74] Next update check in 9m13s Mar 17 18:46:21.353414 systemd[1]: Started update-engine.service. Mar 17 18:46:21.358011 systemd[1]: Started locksmithd.service. Mar 17 18:46:21.467730 bash[1356]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:46:21.469166 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Mar 17 18:46:21.562030 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 17 18:46:21.588778 extend-filesystems[1348]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 18:46:21.588778 extend-filesystems[1348]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 17 18:46:21.588778 extend-filesystems[1348]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 17 18:46:21.591406 extend-filesystems[1294]: Resized filesystem in /dev/vda9 Mar 17 18:46:21.591406 extend-filesystems[1294]: Found vdb Mar 17 18:46:21.590350 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:46:21.590854 systemd[1]: Finished extend-filesystems.service. Mar 17 18:46:21.616056 env[1312]: time="2025-03-17T18:46:21.615957939Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:46:21.617730 coreos-metadata[1291]: Mar 17 18:46:21.617 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 18:46:21.641347 coreos-metadata[1291]: Mar 17 18:46:21.641 INFO Fetch successful Mar 17 18:46:21.658727 unknown[1291]: wrote ssh authorized keys file for user: core Mar 17 18:46:21.686124 update-ssh-keys[1369]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:46:21.686466 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Mar 17 18:46:21.737154 systemd-logind[1305]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 18:46:21.740642 systemd-logind[1305]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 18:46:21.750692 systemd-logind[1305]: New seat seat0. Mar 17 18:46:21.758546 systemd[1]: Started systemd-logind.service. Mar 17 18:46:21.761076 env[1312]: time="2025-03-17T18:46:21.760856557Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 17 18:46:21.761391 env[1312]: time="2025-03-17T18:46:21.761299213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:46:21.776311 env[1312]: time="2025-03-17T18:46:21.775136910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:46:21.776311 env[1312]: time="2025-03-17T18:46:21.775283618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:46:21.777305 env[1312]: time="2025-03-17T18:46:21.777243018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:46:21.777512 env[1312]: time="2025-03-17T18:46:21.777487257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:46:21.777618 env[1312]: time="2025-03-17T18:46:21.777594938Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:46:21.777706 env[1312]: time="2025-03-17T18:46:21.777685712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:46:21.778248 env[1312]: time="2025-03-17T18:46:21.778182119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:46:21.779809 env[1312]: time="2025-03-17T18:46:21.779726922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 18:46:21.792834 env[1312]: time="2025-03-17T18:46:21.792769183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:46:21.800945 env[1312]: time="2025-03-17T18:46:21.800875676Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:46:21.801458 env[1312]: time="2025-03-17T18:46:21.801416130Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:46:21.801688 env[1312]: time="2025-03-17T18:46:21.801663165Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:46:21.816382 env[1312]: time="2025-03-17T18:46:21.816320335Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:46:21.816741 env[1312]: time="2025-03-17T18:46:21.816622692Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:46:21.816874 env[1312]: time="2025-03-17T18:46:21.816848602Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:46:21.817008 env[1312]: time="2025-03-17T18:46:21.816984575Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:46:21.817112 env[1312]: time="2025-03-17T18:46:21.817091113Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:46:21.817225 env[1312]: time="2025-03-17T18:46:21.817187018Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 17 18:46:21.817486 env[1312]: time="2025-03-17T18:46:21.817462971Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:46:21.818072 env[1312]: time="2025-03-17T18:46:21.817710007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:46:21.820371 env[1312]: time="2025-03-17T18:46:21.820299953Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:46:21.826011 env[1312]: time="2025-03-17T18:46:21.825910123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:46:21.826011 env[1312]: time="2025-03-17T18:46:21.825977693Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:46:21.826011 env[1312]: time="2025-03-17T18:46:21.826004318Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:46:21.826354 env[1312]: time="2025-03-17T18:46:21.826269779Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:46:21.826676 env[1312]: time="2025-03-17T18:46:21.826435853Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:46:21.827148 env[1312]: time="2025-03-17T18:46:21.827054779Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:46:21.827148 env[1312]: time="2025-03-17T18:46:21.827119582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827148 env[1312]: time="2025-03-17T18:46:21.827142644Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Mar 17 18:46:21.827327 env[1312]: time="2025-03-17T18:46:21.827233400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827327 env[1312]: time="2025-03-17T18:46:21.827256219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827327 env[1312]: time="2025-03-17T18:46:21.827275335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827327 env[1312]: time="2025-03-17T18:46:21.827293588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827327 env[1312]: time="2025-03-17T18:46:21.827315565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827538 env[1312]: time="2025-03-17T18:46:21.827336259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827538 env[1312]: time="2025-03-17T18:46:21.827355467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827538 env[1312]: time="2025-03-17T18:46:21.827373051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827538 env[1312]: time="2025-03-17T18:46:21.827401418Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:46:21.827856 env[1312]: time="2025-03-17T18:46:21.827565922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827856 env[1312]: time="2025-03-17T18:46:21.827589150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 17 18:46:21.827856 env[1312]: time="2025-03-17T18:46:21.827608623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:46:21.827856 env[1312]: time="2025-03-17T18:46:21.827627539Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:46:21.827856 env[1312]: time="2025-03-17T18:46:21.827655468Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:46:21.827856 env[1312]: time="2025-03-17T18:46:21.827673975Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:46:21.827856 env[1312]: time="2025-03-17T18:46:21.827701828Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:46:21.827856 env[1312]: time="2025-03-17T18:46:21.827754314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 18:46:21.828273 env[1312]: time="2025-03-17T18:46:21.828063881Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:46:21.828273 env[1312]: time="2025-03-17T18:46:21.828154511Z" level=info msg="Connect containerd service" Mar 17 18:46:21.828273 env[1312]: time="2025-03-17T18:46:21.828244739Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:46:21.832400 env[1312]: time="2025-03-17T18:46:21.832186080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:46:21.832973 env[1312]: time="2025-03-17T18:46:21.832918915Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:46:21.833041 env[1312]: time="2025-03-17T18:46:21.833027310Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:46:21.833378 systemd[1]: Started containerd.service. 
Mar 17 18:46:21.847356 env[1312]: time="2025-03-17T18:46:21.846621891Z" level=info msg="Start subscribing containerd event" Mar 17 18:46:21.847356 env[1312]: time="2025-03-17T18:46:21.846847071Z" level=info msg="Start recovering state" Mar 17 18:46:21.847356 env[1312]: time="2025-03-17T18:46:21.847185239Z" level=info msg="Start event monitor" Mar 17 18:46:21.847356 env[1312]: time="2025-03-17T18:46:21.847270677Z" level=info msg="Start snapshots syncer" Mar 17 18:46:21.847356 env[1312]: time="2025-03-17T18:46:21.847295938Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:46:21.847356 env[1312]: time="2025-03-17T18:46:21.847308899Z" level=info msg="Start streaming server" Mar 17 18:46:21.865980 env[1312]: time="2025-03-17T18:46:21.865760361Z" level=info msg="containerd successfully booted in 0.283371s" Mar 17 18:46:22.239831 sshd_keygen[1329]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:46:22.320646 systemd[1]: Finished sshd-keygen.service. Mar 17 18:46:22.324203 systemd[1]: Starting issuegen.service... Mar 17 18:46:22.362510 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:46:22.362932 systemd[1]: Finished issuegen.service. Mar 17 18:46:22.371619 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:46:22.436339 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:46:22.440320 systemd[1]: Started getty@tty1.service. Mar 17 18:46:22.448114 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:46:22.450053 systemd[1]: Reached target getty.target. Mar 17 18:46:22.995478 locksmithd[1351]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:46:23.256474 tar[1310]: linux-amd64/LICENSE Mar 17 18:46:23.256474 tar[1310]: linux-amd64/README.md Mar 17 18:46:23.273405 systemd[1]: Finished prepare-helm.service. Mar 17 18:46:24.518242 systemd[1]: Started kubelet.service. Mar 17 18:46:24.523906 systemd[1]: Reached target multi-user.target. 
Mar 17 18:46:24.550277 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:46:24.575958 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:46:24.576601 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:46:24.602026 systemd[1]: Startup finished in 10.214s (kernel) + 13.823s (userspace) = 24.037s. Mar 17 18:46:26.170462 kubelet[1408]: E0317 18:46:26.170395 1408 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:46:26.173624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:46:26.174005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:46:29.331633 systemd[1]: Created slice system-sshd.slice. Mar 17 18:46:29.335664 systemd[1]: Started sshd@0-134.199.210.138:22-139.178.68.195:34956.service. Mar 17 18:46:29.504736 sshd[1417]: Accepted publickey for core from 139.178.68.195 port 34956 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:46:29.510433 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:46:29.544082 systemd[1]: Created slice user-500.slice. Mar 17 18:46:29.549104 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:46:29.562176 systemd-logind[1305]: New session 1 of user core. Mar 17 18:46:29.604728 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:46:29.607815 systemd[1]: Starting user@500.service... Mar 17 18:46:29.627820 (systemd)[1421]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:46:29.821885 systemd[1421]: Queued start job for default target default.target. 
Mar 17 18:46:29.822345 systemd[1421]: Reached target paths.target.
Mar 17 18:46:29.822387 systemd[1421]: Reached target sockets.target.
Mar 17 18:46:29.822411 systemd[1421]: Reached target timers.target.
Mar 17 18:46:29.822431 systemd[1421]: Reached target basic.target.
Mar 17 18:46:29.822516 systemd[1421]: Reached target default.target.
Mar 17 18:46:29.822574 systemd[1421]: Startup finished in 167ms.
Mar 17 18:46:29.822695 systemd[1]: Started user@500.service.
Mar 17 18:46:29.826705 systemd[1]: Started session-1.scope.
Mar 17 18:46:29.900546 systemd[1]: Started sshd@1-134.199.210.138:22-139.178.68.195:34964.service.
Mar 17 18:46:30.026778 sshd[1431]: Accepted publickey for core from 139.178.68.195 port 34964 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:46:30.029718 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:46:30.041328 systemd-logind[1305]: New session 2 of user core.
Mar 17 18:46:30.041725 systemd[1]: Started session-2.scope.
Mar 17 18:46:30.151573 sshd[1431]: pam_unix(sshd:session): session closed for user core
Mar 17 18:46:30.162666 systemd[1]: Started sshd@2-134.199.210.138:22-139.178.68.195:34976.service.
Mar 17 18:46:30.173108 systemd[1]: sshd@1-134.199.210.138:22-139.178.68.195:34964.service: Deactivated successfully.
Mar 17 18:46:30.175667 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 18:46:30.176606 systemd-logind[1305]: Session 2 logged out. Waiting for processes to exit.
Mar 17 18:46:30.185808 systemd-logind[1305]: Removed session 2.
Mar 17 18:46:30.246293 sshd[1436]: Accepted publickey for core from 139.178.68.195 port 34976 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:46:30.255728 sshd[1436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:46:30.266374 systemd[1]: Started session-3.scope.
Mar 17 18:46:30.268221 systemd-logind[1305]: New session 3 of user core.
Mar 17 18:46:30.370777 sshd[1436]: pam_unix(sshd:session): session closed for user core
Mar 17 18:46:30.377148 systemd[1]: Started sshd@3-134.199.210.138:22-139.178.68.195:34986.service.
Mar 17 18:46:30.383960 systemd[1]: sshd@2-134.199.210.138:22-139.178.68.195:34976.service: Deactivated successfully.
Mar 17 18:46:30.388797 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 18:46:30.389769 systemd-logind[1305]: Session 3 logged out. Waiting for processes to exit.
Mar 17 18:46:30.394770 systemd-logind[1305]: Removed session 3.
Mar 17 18:46:30.443114 sshd[1443]: Accepted publickey for core from 139.178.68.195 port 34986 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:46:30.446289 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:46:30.476650 systemd-logind[1305]: New session 4 of user core.
Mar 17 18:46:30.481047 systemd[1]: Started session-4.scope.
Mar 17 18:46:30.576530 sshd[1443]: pam_unix(sshd:session): session closed for user core
Mar 17 18:46:30.587923 systemd[1]: Started sshd@4-134.199.210.138:22-139.178.68.195:34990.service.
Mar 17 18:46:30.595881 systemd[1]: sshd@3-134.199.210.138:22-139.178.68.195:34986.service: Deactivated successfully.
Mar 17 18:46:30.597556 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:46:30.605289 systemd-logind[1305]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:46:30.607367 systemd-logind[1305]: Removed session 4.
Mar 17 18:46:30.693358 sshd[1450]: Accepted publickey for core from 139.178.68.195 port 34990 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:46:30.698500 sshd[1450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:46:30.711376 systemd-logind[1305]: New session 5 of user core.
Mar 17 18:46:30.720688 systemd[1]: Started session-5.scope.
Mar 17 18:46:30.832934 sudo[1456]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:46:30.833495 sudo[1456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 18:46:30.914503 systemd[1]: Starting docker.service...
Mar 17 18:46:31.066459 env[1466]: time="2025-03-17T18:46:31.066289527Z" level=info msg="Starting up"
Mar 17 18:46:31.072193 env[1466]: time="2025-03-17T18:46:31.071878139Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:46:31.072193 env[1466]: time="2025-03-17T18:46:31.071931826Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:46:31.072193 env[1466]: time="2025-03-17T18:46:31.071968169Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:46:31.072193 env[1466]: time="2025-03-17T18:46:31.071988756Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:46:31.078442 env[1466]: time="2025-03-17T18:46:31.077884024Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:46:31.078442 env[1466]: time="2025-03-17T18:46:31.077937263Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:46:31.078442 env[1466]: time="2025-03-17T18:46:31.077967270Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:46:31.078442 env[1466]: time="2025-03-17T18:46:31.077986938Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:46:31.097917 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport335974492-merged.mount: Deactivated successfully.
Mar 17 18:46:31.207029 env[1466]: time="2025-03-17T18:46:31.206933047Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Mar 17 18:46:31.207029 env[1466]: time="2025-03-17T18:46:31.206980926Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Mar 17 18:46:31.208354 env[1466]: time="2025-03-17T18:46:31.207355721Z" level=info msg="Loading containers: start."
Mar 17 18:46:31.450315 kernel: Initializing XFRM netlink socket
Mar 17 18:46:31.521399 env[1466]: time="2025-03-17T18:46:31.520813605Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 17 18:46:31.671209 systemd-networkd[1080]: docker0: Link UP
Mar 17 18:46:31.715315 env[1466]: time="2025-03-17T18:46:31.715203322Z" level=info msg="Loading containers: done."
Mar 17 18:46:31.754579 env[1466]: time="2025-03-17T18:46:31.751305712Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 18:46:31.754579 env[1466]: time="2025-03-17T18:46:31.751642126Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 17 18:46:31.754579 env[1466]: time="2025-03-17T18:46:31.751840582Z" level=info msg="Daemon has completed initialization"
Mar 17 18:46:31.754736 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2339543922-merged.mount: Deactivated successfully.
Mar 17 18:46:31.785976 systemd[1]: Started docker.service.
Mar 17 18:46:31.804900 env[1466]: time="2025-03-17T18:46:31.804809560Z" level=info msg="API listen on /run/docker.sock"
Mar 17 18:46:31.846120 systemd[1]: Starting coreos-metadata.service...
Mar 17 18:46:31.915244 coreos-metadata[1584]: Mar 17 18:46:31.914 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 18:46:31.933235 coreos-metadata[1584]: Mar 17 18:46:31.933 INFO Fetch successful
Mar 17 18:46:31.961336 systemd[1]: Finished coreos-metadata.service.
Mar 17 18:46:33.638237 env[1312]: time="2025-03-17T18:46:33.638094684Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 18:46:34.318975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644693287.mount: Deactivated successfully.
Mar 17 18:46:36.436668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:46:36.437030 systemd[1]: Stopped kubelet.service.
Mar 17 18:46:36.440543 systemd[1]: Starting kubelet.service...
Mar 17 18:46:36.671723 systemd[1]: Started kubelet.service.
Mar 17 18:46:36.723052 env[1312]: time="2025-03-17T18:46:36.722351511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:36.727337 env[1312]: time="2025-03-17T18:46:36.727019889Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:36.731355 env[1312]: time="2025-03-17T18:46:36.731286226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:36.751061 env[1312]: time="2025-03-17T18:46:36.750939671Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:36.752307 env[1312]: time="2025-03-17T18:46:36.752207252Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 18:46:36.782564 env[1312]: time="2025-03-17T18:46:36.782488691Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 18:46:36.833932 kubelet[1619]: E0317 18:46:36.833788 1619 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:46:36.842140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:46:36.842675 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:46:39.538155 env[1312]: time="2025-03-17T18:46:39.538070185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:39.544667 env[1312]: time="2025-03-17T18:46:39.544595958Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:39.547035 env[1312]: time="2025-03-17T18:46:39.546943974Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 18:46:39.552315 env[1312]: time="2025-03-17T18:46:39.549042493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:39.552315 env[1312]: time="2025-03-17T18:46:39.550632462Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:39.567833 env[1312]: time="2025-03-17T18:46:39.567716815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 18:46:41.555975 env[1312]: time="2025-03-17T18:46:41.555844239Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:41.558808 env[1312]: time="2025-03-17T18:46:41.558740171Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:41.562372 env[1312]: time="2025-03-17T18:46:41.562302375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:41.565149 env[1312]: time="2025-03-17T18:46:41.565076356Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:41.566597 env[1312]: time="2025-03-17T18:46:41.566530226Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 18:46:41.593042 env[1312]: time="2025-03-17T18:46:41.592988651Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 18:46:43.083003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688015562.mount: Deactivated successfully.
Mar 17 18:46:44.161588 env[1312]: time="2025-03-17T18:46:44.161502094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:44.165186 env[1312]: time="2025-03-17T18:46:44.165091242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:44.168723 env[1312]: time="2025-03-17T18:46:44.168641972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:44.177647 env[1312]: time="2025-03-17T18:46:44.174918912Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:44.177647 env[1312]: time="2025-03-17T18:46:44.175322151Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 18:46:44.203751 env[1312]: time="2025-03-17T18:46:44.203646806Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 18:46:44.696760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741736224.mount: Deactivated successfully.
Mar 17 18:46:46.090816 env[1312]: time="2025-03-17T18:46:46.090480028Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:46.093849 env[1312]: time="2025-03-17T18:46:46.093662674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:46.097014 env[1312]: time="2025-03-17T18:46:46.096941809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:46.099874 env[1312]: time="2025-03-17T18:46:46.099822409Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:46.101860 env[1312]: time="2025-03-17T18:46:46.101780349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 18:46:46.123889 env[1312]: time="2025-03-17T18:46:46.123829538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 18:46:46.667332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110030040.mount: Deactivated successfully.
Mar 17 18:46:46.679358 env[1312]: time="2025-03-17T18:46:46.679252785Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:46.683989 env[1312]: time="2025-03-17T18:46:46.683908567Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:46.687398 env[1312]: time="2025-03-17T18:46:46.687337107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:46.689661 env[1312]: time="2025-03-17T18:46:46.689603325Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:46.690269 env[1312]: time="2025-03-17T18:46:46.690192005Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 18:46:46.725539 env[1312]: time="2025-03-17T18:46:46.725491766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 18:46:47.094174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 18:46:47.094449 systemd[1]: Stopped kubelet.service.
Mar 17 18:46:47.097720 systemd[1]: Starting kubelet.service...
Mar 17 18:46:47.267502 systemd[1]: Started kubelet.service.
Mar 17 18:46:47.325822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519257183.mount: Deactivated successfully.
Mar 17 18:46:47.436422 kubelet[1668]: E0317 18:46:47.436277 1668 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:46:47.439270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:46:47.439706 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:46:51.676120 env[1312]: time="2025-03-17T18:46:51.676013118Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:51.684809 env[1312]: time="2025-03-17T18:46:51.683529762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:51.690887 env[1312]: time="2025-03-17T18:46:51.689585845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:51.698386 env[1312]: time="2025-03-17T18:46:51.694788712Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:46:51.698386 env[1312]: time="2025-03-17T18:46:51.695401013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 18:46:56.228902 systemd[1]: Stopped kubelet.service.
Mar 17 18:46:56.238630 systemd[1]: Starting kubelet.service...
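Annotation: the three identical kubelet run.go:74 failures in this log all have the same cause — kubelet.service starts before /var/lib/kubelet/config.yaml has been written, so each attempt exits with status=1 and systemd schedules a restart. For reference only, a KubeletConfiguration of the kind that would satisfy this path might look like the sketch below; every field value is an illustrative assumption, not recovered from this host:

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml -- illustrative only.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0                          # kubelet later listens on 0.0.0.0:10250
port: 10250
staticPodPath: /etc/kubernetes/manifests  # matches the static pod path the kubelet logs
cgroupDriver: cgroupfs                    # matches "CgroupDriver":"cgroupfs" in the node config dump
```

On kubeadm-provisioned hosts this file is normally written by `kubeadm init`/`kubeadm join`, which is why the unit keeps failing and restarting until that provisioning step completes.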
Mar 17 18:46:56.295860 systemd[1]: Reloading.
Mar 17 18:46:56.557472 /usr/lib/systemd/system-generators/torcx-generator[1771]: time="2025-03-17T18:46:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:46:56.557522 /usr/lib/systemd/system-generators/torcx-generator[1771]: time="2025-03-17T18:46:56Z" level=info msg="torcx already run"
Mar 17 18:46:56.801711 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:46:56.811271 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:46:56.875999 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:46:57.056688 systemd[1]: Started kubelet.service.
Mar 17 18:46:57.071963 systemd[1]: Stopping kubelet.service...
Mar 17 18:46:57.075012 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:46:57.075835 systemd[1]: Stopped kubelet.service.
Mar 17 18:46:57.082656 systemd[1]: Starting kubelet.service...
Mar 17 18:46:57.272244 systemd[1]: Started kubelet.service.
Mar 17 18:46:57.420252 kubelet[1836]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:46:57.420897 kubelet[1836]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:46:57.421021 kubelet[1836]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:46:57.421342 kubelet[1836]: I0317 18:46:57.421280 1836 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:46:57.897425 kubelet[1836]: I0317 18:46:57.897367 1836 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:46:57.897695 kubelet[1836]: I0317 18:46:57.897673 1836 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:46:57.898382 kubelet[1836]: I0317 18:46:57.898181 1836 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:46:57.965928 kubelet[1836]: I0317 18:46:57.965230 1836 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:46:57.982208 kubelet[1836]: E0317 18:46:57.982171 1836 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://134.199.210.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 134.199.210.138:6443: connect: connection refused
Mar 17 18:46:58.010866 kubelet[1836]: I0317 18:46:58.010815 1836 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:46:58.016623 kubelet[1836]: I0317 18:46:58.016493 1836 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:46:58.017676 kubelet[1836]: I0317 18:46:58.016908 1836 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-0-797a2fde87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:46:58.020546 kubelet[1836]: I0317 18:46:58.020501 1836 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:46:58.020946 kubelet[1836]: I0317 18:46:58.020924 1836 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:46:58.024826 kubelet[1836]: I0317 18:46:58.023298 1836 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:46:58.026436 kubelet[1836]: I0317 18:46:58.026206 1836 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:46:58.026436 kubelet[1836]: I0317 18:46:58.026429 1836 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:46:58.026705 kubelet[1836]: I0317 18:46:58.026492 1836 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:46:58.026705 kubelet[1836]: I0317 18:46:58.026525 1836 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:46:58.042963 kubelet[1836]: I0317 18:46:58.042142 1836 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:46:58.045932 kubelet[1836]: I0317 18:46:58.045858 1836 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:46:58.046185 kubelet[1836]: W0317 18:46:58.045998 1836 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:46:58.047711 kubelet[1836]: I0317 18:46:58.047660 1836 server.go:1264] "Started kubelet"
Mar 17 18:46:58.047932 kubelet[1836]: W0317 18:46:58.047861 1836 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.210.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused
Mar 17 18:46:58.048048 kubelet[1836]: E0317 18:46:58.047949 1836 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://134.199.210.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused
Mar 17 18:46:58.048173 kubelet[1836]: W0317 18:46:58.048111 1836 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.210.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-0-797a2fde87&limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused
Mar 17 18:46:58.048302 kubelet[1836]: E0317 18:46:58.048281 1836 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://134.199.210.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-0-797a2fde87&limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused
Mar 17 18:46:58.073268 kubelet[1836]: I0317 18:46:58.073154 1836 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:46:58.073879 kubelet[1836]: I0317 18:46:58.073787 1836 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:46:58.074561 kubelet[1836]: I0317 18:46:58.074525 1836 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:46:58.075539 kubelet[1836]: I0317 18:46:58.075511 1836 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:46:58.076913 kubelet[1836]: E0317 18:46:58.076759 1836 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.210.138:6443/api/v1/namespaces/default/events\": dial tcp 134.199.210.138:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-0-797a2fde87.182dab84e0316056 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-0-797a2fde87,UID:ci-3510.3.7-0-797a2fde87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-0-797a2fde87,},FirstTimestamp:2025-03-17 18:46:58.047615062 +0000 UTC m=+0.744438278,LastTimestamp:2025-03-17 18:46:58.047615062 +0000 UTC m=+0.744438278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-0-797a2fde87,}"
Mar 17 18:46:58.081922 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 18:46:58.085733 kubelet[1836]: I0317 18:46:58.082807 1836 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:46:58.092318 kubelet[1836]: E0317 18:46:58.089826 1836 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:46:58.098900 kubelet[1836]: I0317 18:46:58.098831 1836 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:46:58.099602 kubelet[1836]: I0317 18:46:58.099559 1836 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:46:58.099758 kubelet[1836]: I0317 18:46:58.099675 1836 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:46:58.101290 kubelet[1836]: I0317 18:46:58.101230 1836 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:46:58.103132 kubelet[1836]: I0317 18:46:58.103072 1836 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:46:58.104046 kubelet[1836]: W0317 18:46:58.103828 1836 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.210.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused
Mar 17 18:46:58.104372 kubelet[1836]: E0317 18:46:58.104080 1836 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://134.199.210.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused
Mar 17 18:46:58.104372 kubelet[1836]: E0317 18:46:58.104236 1836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.210.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-0-797a2fde87?timeout=10s\": dial tcp 134.199.210.138:6443: connect: connection refused" interval="200ms"
Mar 17 18:46:58.106189 kubelet[1836]: I0317 18:46:58.106148 1836 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:46:58.162781 kubelet[1836]: I0317 18:46:58.162606 1836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:46:58.169532 kubelet[1836]: I0317 18:46:58.168291 1836 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:46:58.169532 kubelet[1836]: I0317 18:46:58.168351 1836 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:46:58.169532 kubelet[1836]: I0317 18:46:58.168394 1836 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:46:58.169532 kubelet[1836]: E0317 18:46:58.168478 1836 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:46:58.177554 kubelet[1836]: W0317 18:46:58.177496 1836 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.210.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused
Mar 17 18:46:58.177768 kubelet[1836]: E0317 18:46:58.177576 1836 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://134.199.210.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused
Mar 17 18:46:58.183961 kubelet[1836]: I0317 18:46:58.183917 1836 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:46:58.183961 kubelet[1836]: I0317 18:46:58.183947 1836 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:46:58.184199 kubelet[1836]: I0317 18:46:58.184177 1836 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:46:58.186981 kubelet[1836]: I0317 18:46:58.186945 1836 policy_none.go:49] "None policy: Start"
Mar 17 18:46:58.188407 kubelet[1836]: I0317 18:46:58.188371 1836 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:46:58.188662 kubelet[1836]: I0317 18:46:58.188644 1836 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:46:58.198176 kubelet[1836]: I0317 18:46:58.198126 1836 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:46:58.198752 kubelet[1836]: I0317 18:46:58.198690 1836 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:46:58.199065 kubelet[1836]: I0317 18:46:58.199046 1836 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:46:58.201925 kubelet[1836]: I0317 18:46:58.201489 1836 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-0-797a2fde87"
Mar 17 18:46:58.202395 kubelet[1836]: E0317 18:46:58.202346 1836 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://134.199.210.138:6443/api/v1/nodes\": dial tcp 134.199.210.138:6443: connect: connection refused" node="ci-3510.3.7-0-797a2fde87"
Mar 17 18:46:58.203455 kubelet[1836]: E0317 18:46:58.203356 1836 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-0-797a2fde87\" not found"
Mar 17 18:46:58.268822 kubelet[1836]: I0317 18:46:58.268722 1836 topology_manager.go:215] "Topology Admit Handler" podUID="adda34ff83d51806db68cda8ecf55521" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-0-797a2fde87"
Mar 17 18:46:58.271630 kubelet[1836]: I0317 18:46:58.271538 1836 topology_manager.go:215] "Topology Admit Handler" podUID="a522ab8894f4ff996e8d0c5956936809" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-0-797a2fde87"
Mar 17 18:46:58.280694 kubelet[1836]: I0317 18:46:58.280309 1836 topology_manager.go:215] "Topology Admit Handler" podUID="82c83ac703e95cd036982facf5cd2cff" podNamespace="kube-system"
podName="kube-controller-manager-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.305858 kubelet[1836]: E0317 18:46:58.305759 1836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.210.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-0-797a2fde87?timeout=10s\": dial tcp 134.199.210.138:6443: connect: connection refused" interval="400ms" Mar 17 18:46:58.407206 kubelet[1836]: I0317 18:46:58.400490 1836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a522ab8894f4ff996e8d0c5956936809-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-0-797a2fde87\" (UID: \"a522ab8894f4ff996e8d0c5956936809\") " pod="kube-system/kube-apiserver-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.407206 kubelet[1836]: I0317 18:46:58.407111 1836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.409955 kubelet[1836]: I0317 18:46:58.409641 1836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.409955 kubelet[1836]: I0317 18:46:58.409692 1836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-kubeconfig\") pod 
\"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.409955 kubelet[1836]: I0317 18:46:58.409724 1836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.409955 kubelet[1836]: I0317 18:46:58.409758 1836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adda34ff83d51806db68cda8ecf55521-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-0-797a2fde87\" (UID: \"adda34ff83d51806db68cda8ecf55521\") " pod="kube-system/kube-scheduler-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.409955 kubelet[1836]: I0317 18:46:58.409784 1836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a522ab8894f4ff996e8d0c5956936809-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-0-797a2fde87\" (UID: \"a522ab8894f4ff996e8d0c5956936809\") " pod="kube-system/kube-apiserver-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.410671 kubelet[1836]: I0317 18:46:58.409812 1836 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a522ab8894f4ff996e8d0c5956936809-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-0-797a2fde87\" (UID: \"a522ab8894f4ff996e8d0c5956936809\") " pod="kube-system/kube-apiserver-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.410671 kubelet[1836]: I0317 18:46:58.409839 1836 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.412058 kubelet[1836]: I0317 18:46:58.411375 1836 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.412058 kubelet[1836]: E0317 18:46:58.411999 1836 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://134.199.210.138:6443/api/v1/nodes\": dial tcp 134.199.210.138:6443: connect: connection refused" node="ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.578133 kubelet[1836]: E0317 18:46:58.578083 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:46:58.580206 env[1312]: time="2025-03-17T18:46:58.580076998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-0-797a2fde87,Uid:adda34ff83d51806db68cda8ecf55521,Namespace:kube-system,Attempt:0,}" Mar 17 18:46:58.596425 kubelet[1836]: E0317 18:46:58.595185 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:46:58.600270 env[1312]: time="2025-03-17T18:46:58.596992350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-0-797a2fde87,Uid:a522ab8894f4ff996e8d0c5956936809,Namespace:kube-system,Attempt:0,}" Mar 17 18:46:58.600270 env[1312]: time="2025-03-17T18:46:58.600019878Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-0-797a2fde87,Uid:82c83ac703e95cd036982facf5cd2cff,Namespace:kube-system,Attempt:0,}" Mar 17 18:46:58.600561 kubelet[1836]: E0317 18:46:58.598050 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:46:58.707866 kubelet[1836]: E0317 18:46:58.707697 1836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.210.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-0-797a2fde87?timeout=10s\": dial tcp 134.199.210.138:6443: connect: connection refused" interval="800ms" Mar 17 18:46:58.815051 kubelet[1836]: I0317 18:46:58.813798 1836 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-0-797a2fde87" Mar 17 18:46:58.815051 kubelet[1836]: E0317 18:46:58.814261 1836 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://134.199.210.138:6443/api/v1/nodes\": dial tcp 134.199.210.138:6443: connect: connection refused" node="ci-3510.3.7-0-797a2fde87" Mar 17 18:46:59.099081 kubelet[1836]: W0317 18:46:59.098894 1836 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.210.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused Mar 17 18:46:59.099081 kubelet[1836]: E0317 18:46:59.099011 1836 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://134.199.210.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused Mar 17 18:46:59.153927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3718443890.mount: Deactivated successfully. 
Mar 17 18:46:59.162934 env[1312]: time="2025-03-17T18:46:59.162871257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.169904 env[1312]: time="2025-03-17T18:46:59.169830838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.173807 env[1312]: time="2025-03-17T18:46:59.173740446Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.176532 env[1312]: time="2025-03-17T18:46:59.176454223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.178438 env[1312]: time="2025-03-17T18:46:59.178180088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.187301 env[1312]: time="2025-03-17T18:46:59.187234360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.194797 env[1312]: time="2025-03-17T18:46:59.194719838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.200894 env[1312]: time="2025-03-17T18:46:59.200827785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.204115 env[1312]: time="2025-03-17T18:46:59.204013526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.205334 env[1312]: time="2025-03-17T18:46:59.205197419Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.206778 env[1312]: time="2025-03-17T18:46:59.206509834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.207848 env[1312]: time="2025-03-17T18:46:59.207729648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:46:59.283422 env[1312]: time="2025-03-17T18:46:59.281589013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:46:59.283422 env[1312]: time="2025-03-17T18:46:59.281703807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:46:59.283422 env[1312]: time="2025-03-17T18:46:59.281733761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:46:59.283422 env[1312]: time="2025-03-17T18:46:59.281950703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b9c7430240d7bfddf7dfb454371e9320030067961f9a54f82b14afb4d401404 pid=1876 runtime=io.containerd.runc.v2 Mar 17 18:46:59.325802 env[1312]: time="2025-03-17T18:46:59.325641608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:46:59.326043 env[1312]: time="2025-03-17T18:46:59.325834273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:46:59.326043 env[1312]: time="2025-03-17T18:46:59.325923319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:46:59.326624 env[1312]: time="2025-03-17T18:46:59.326510509Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff3f432b475c90d95b145f9c175efe2a0fbcc7c7e8d539a91fd49a712166ce72 pid=1903 runtime=io.containerd.runc.v2 Mar 17 18:46:59.331695 env[1312]: time="2025-03-17T18:46:59.331131628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:46:59.331695 env[1312]: time="2025-03-17T18:46:59.331329436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:46:59.331695 env[1312]: time="2025-03-17T18:46:59.331405485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:46:59.332740 env[1312]: time="2025-03-17T18:46:59.332339972Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1030f49f613305ca9edd880413331694c132d586ad370d6c1636936295c6ca91 pid=1904 runtime=io.containerd.runc.v2 Mar 17 18:46:59.404319 kubelet[1836]: W0317 18:46:59.403999 1836 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.210.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused Mar 17 18:46:59.404319 kubelet[1836]: E0317 18:46:59.404070 1836 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://134.199.210.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused Mar 17 18:46:59.409865 kubelet[1836]: W0317 18:46:59.409705 1836 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.210.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused Mar 17 18:46:59.409865 kubelet[1836]: E0317 18:46:59.409789 1836 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://134.199.210.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused Mar 17 18:46:59.515020 kubelet[1836]: E0317 18:46:59.514883 1836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.210.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-0-797a2fde87?timeout=10s\": dial tcp 134.199.210.138:6443: connect: connection 
refused" interval="1.6s" Mar 17 18:46:59.536923 kubelet[1836]: W0317 18:46:59.536722 1836 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.210.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-0-797a2fde87&limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused Mar 17 18:46:59.536923 kubelet[1836]: E0317 18:46:59.536874 1836 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://134.199.210.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-0-797a2fde87&limit=500&resourceVersion=0": dial tcp 134.199.210.138:6443: connect: connection refused Mar 17 18:46:59.540544 env[1312]: time="2025-03-17T18:46:59.540469574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-0-797a2fde87,Uid:a522ab8894f4ff996e8d0c5956936809,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff3f432b475c90d95b145f9c175efe2a0fbcc7c7e8d539a91fd49a712166ce72\"" Mar 17 18:46:59.544697 kubelet[1836]: E0317 18:46:59.544080 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:46:59.550501 env[1312]: time="2025-03-17T18:46:59.550428606Z" level=info msg="CreateContainer within sandbox \"ff3f432b475c90d95b145f9c175efe2a0fbcc7c7e8d539a91fd49a712166ce72\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:46:59.588556 env[1312]: time="2025-03-17T18:46:59.588404610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-0-797a2fde87,Uid:82c83ac703e95cd036982facf5cd2cff,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b9c7430240d7bfddf7dfb454371e9320030067961f9a54f82b14afb4d401404\"" Mar 17 18:46:59.594546 kubelet[1836]: E0317 18:46:59.594386 1836 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:46:59.599991 env[1312]: time="2025-03-17T18:46:59.599909275Z" level=info msg="CreateContainer within sandbox \"2b9c7430240d7bfddf7dfb454371e9320030067961f9a54f82b14afb4d401404\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:46:59.612137 env[1312]: time="2025-03-17T18:46:59.612077413Z" level=info msg="CreateContainer within sandbox \"ff3f432b475c90d95b145f9c175efe2a0fbcc7c7e8d539a91fd49a712166ce72\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"90b57ce725eee84b222cb595244ddefb1d73e89b208207b093b025a9354137ec\"" Mar 17 18:46:59.614145 env[1312]: time="2025-03-17T18:46:59.614096567Z" level=info msg="StartContainer for \"90b57ce725eee84b222cb595244ddefb1d73e89b208207b093b025a9354137ec\"" Mar 17 18:46:59.615650 env[1312]: time="2025-03-17T18:46:59.615593531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-0-797a2fde87,Uid:adda34ff83d51806db68cda8ecf55521,Namespace:kube-system,Attempt:0,} returns sandbox id \"1030f49f613305ca9edd880413331694c132d586ad370d6c1636936295c6ca91\"" Mar 17 18:46:59.623719 kubelet[1836]: I0317 18:46:59.623673 1836 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-0-797a2fde87" Mar 17 18:46:59.624811 kubelet[1836]: E0317 18:46:59.624754 1836 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://134.199.210.138:6443/api/v1/nodes\": dial tcp 134.199.210.138:6443: connect: connection refused" node="ci-3510.3.7-0-797a2fde87" Mar 17 18:46:59.625430 kubelet[1836]: E0317 18:46:59.625368 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:46:59.629262 env[1312]: 
time="2025-03-17T18:46:59.629179784Z" level=info msg="CreateContainer within sandbox \"1030f49f613305ca9edd880413331694c132d586ad370d6c1636936295c6ca91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:46:59.631676 env[1312]: time="2025-03-17T18:46:59.631591425Z" level=info msg="CreateContainer within sandbox \"2b9c7430240d7bfddf7dfb454371e9320030067961f9a54f82b14afb4d401404\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"217c13720193ff845f97acbed4f5dd0392d1c08337546b7bb4fe685a64e22f21\"" Mar 17 18:46:59.632679 env[1312]: time="2025-03-17T18:46:59.632607651Z" level=info msg="StartContainer for \"217c13720193ff845f97acbed4f5dd0392d1c08337546b7bb4fe685a64e22f21\"" Mar 17 18:46:59.663548 env[1312]: time="2025-03-17T18:46:59.663365157Z" level=info msg="CreateContainer within sandbox \"1030f49f613305ca9edd880413331694c132d586ad370d6c1636936295c6ca91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc36de3b64159bed94ba1cd44d53129f01dd166181543f5109b04434ae0b1a1a\"" Mar 17 18:46:59.664860 env[1312]: time="2025-03-17T18:46:59.664792132Z" level=info msg="StartContainer for \"dc36de3b64159bed94ba1cd44d53129f01dd166181543f5109b04434ae0b1a1a\"" Mar 17 18:46:59.832674 env[1312]: time="2025-03-17T18:46:59.832594144Z" level=info msg="StartContainer for \"217c13720193ff845f97acbed4f5dd0392d1c08337546b7bb4fe685a64e22f21\" returns successfully" Mar 17 18:46:59.849127 env[1312]: time="2025-03-17T18:46:59.849060456Z" level=info msg="StartContainer for \"90b57ce725eee84b222cb595244ddefb1d73e89b208207b093b025a9354137ec\" returns successfully" Mar 17 18:46:59.940975 env[1312]: time="2025-03-17T18:46:59.940904098Z" level=info msg="StartContainer for \"dc36de3b64159bed94ba1cd44d53129f01dd166181543f5109b04434ae0b1a1a\" returns successfully" Mar 17 18:47:00.018681 kubelet[1836]: E0317 18:47:00.018614 1836 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while 
requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://134.199.210.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 134.199.210.138:6443: connect: connection refused Mar 17 18:47:00.197933 kubelet[1836]: E0317 18:47:00.197759 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:00.204352 kubelet[1836]: E0317 18:47:00.204289 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:00.232553 kubelet[1836]: E0317 18:47:00.232457 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:01.216357 kubelet[1836]: E0317 18:47:01.216307 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:01.227550 kubelet[1836]: I0317 18:47:01.227502 1836 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-0-797a2fde87" Mar 17 18:47:02.779013 kubelet[1836]: E0317 18:47:02.778959 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:04.302656 kubelet[1836]: E0317 18:47:04.302582 1836 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-0-797a2fde87\" not found" node="ci-3510.3.7-0-797a2fde87" Mar 17 18:47:04.506351 kubelet[1836]: I0317 18:47:04.506294 1836 kubelet_node_status.go:76] "Successfully registered 
node" node="ci-3510.3.7-0-797a2fde87" Mar 17 18:47:05.052870 kubelet[1836]: I0317 18:47:05.052813 1836 apiserver.go:52] "Watching apiserver" Mar 17 18:47:05.105366 kubelet[1836]: I0317 18:47:05.100911 1836 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:47:06.576912 update_engine[1306]: I0317 18:47:06.575757 1306 update_attempter.cc:509] Updating boot flags... Mar 17 18:47:07.790486 systemd[1]: Reloading. Mar 17 18:47:08.037379 /usr/lib/systemd/system-generators/torcx-generator[2143]: time="2025-03-17T18:47:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:47:08.037632 /usr/lib/systemd/system-generators/torcx-generator[2143]: time="2025-03-17T18:47:08Z" level=info msg="torcx already run" Mar 17 18:47:08.155262 kubelet[1836]: W0317 18:47:08.154518 1836 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:47:08.156700 kubelet[1836]: E0317 18:47:08.156651 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:08.259106 kubelet[1836]: E0317 18:47:08.259058 1836 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:08.432470 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 18:47:08.434543 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:47:08.485090 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:47:08.740364 systemd[1]: Stopping kubelet.service... Mar 17 18:47:08.772111 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:47:08.772624 systemd[1]: Stopped kubelet.service. Mar 17 18:47:08.776450 systemd[1]: Starting kubelet.service... Mar 17 18:47:10.658060 systemd[1]: Started kubelet.service. Mar 17 18:47:10.896582 sudo[2217]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:47:10.897011 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:47:10.916577 kubelet[2206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:47:10.916577 kubelet[2206]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:47:10.916577 kubelet[2206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:47:10.916577 kubelet[2206]: I0317 18:47:10.914843 2206 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:47:10.927035 kubelet[2206]: I0317 18:47:10.926125 2206 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:47:10.927035 kubelet[2206]: I0317 18:47:10.926177 2206 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:47:10.927035 kubelet[2206]: I0317 18:47:10.926711 2206 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:47:10.930255 kubelet[2206]: I0317 18:47:10.929677 2206 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:47:10.932838 kubelet[2206]: I0317 18:47:10.932705 2206 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:47:10.951285 kubelet[2206]: I0317 18:47:10.951195 2206 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:47:10.952201 kubelet[2206]: I0317 18:47:10.952138 2206 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:47:10.952443 kubelet[2206]: I0317 18:47:10.952193 2206 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-0-797a2fde87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:47:10.952580 kubelet[2206]: I0317 18:47:10.952458 2206 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:47:10.952580 kubelet[2206]: I0317 18:47:10.952472 2206 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:47:10.952580 kubelet[2206]: I0317 18:47:10.952533 2206 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:47:10.952752 kubelet[2206]: I0317 18:47:10.952712 2206 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:47:10.961686 kubelet[2206]: I0317 18:47:10.961642 2206 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:47:10.961891 kubelet[2206]: I0317 18:47:10.961803 2206 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:47:10.962074 kubelet[2206]: I0317 18:47:10.962044 2206 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:47:10.983816 kubelet[2206]: I0317 18:47:10.983284 2206 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:47:10.990428 kubelet[2206]: I0317 18:47:10.990381 2206 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:47:10.994637 kubelet[2206]: I0317 18:47:10.992909 2206 server.go:1264] "Started kubelet"
Mar 17 18:47:10.997668 kubelet[2206]: I0317 18:47:10.996884 2206 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:47:10.997668 kubelet[2206]: I0317 18:47:10.997249 2206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:47:11.001435 kubelet[2206]: I0317 18:47:11.001394 2206 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:47:11.005842 kubelet[2206]: I0317 18:47:11.004333 2206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:47:11.009854 kubelet[2206]: I0317 18:47:11.009059 2206 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:47:11.011617 kubelet[2206]: I0317 18:47:11.011291 2206 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:47:11.011617 kubelet[2206]: I0317 18:47:11.011514 2206 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:47:11.021177 kubelet[2206]: I0317 18:47:11.021055 2206 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:47:11.025154 kubelet[2206]: I0317 18:47:11.025100 2206 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:47:11.026411 kubelet[2206]: I0317 18:47:11.026350 2206 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:47:11.036046 kubelet[2206]: I0317 18:47:11.036004 2206 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:47:11.065443 kubelet[2206]: E0317 18:47:11.065364 2206 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:47:11.088203 kubelet[2206]: I0317 18:47:11.088132 2206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:47:11.100406 kubelet[2206]: I0317 18:47:11.100263 2206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:47:11.100801 kubelet[2206]: I0317 18:47:11.100772 2206 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:47:11.101685 kubelet[2206]: I0317 18:47:11.101649 2206 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:47:11.103521 kubelet[2206]: E0317 18:47:11.103457 2206 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:47:11.125836 kubelet[2206]: I0317 18:47:11.124990 2206 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.157118 kubelet[2206]: I0317 18:47:11.157066 2206 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.160527 kubelet[2206]: I0317 18:47:11.160487 2206 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.204409 kubelet[2206]: E0317 18:47:11.204344 2206 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:47:11.284254 kubelet[2206]: I0317 18:47:11.284195 2206 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:47:11.284546 kubelet[2206]: I0317 18:47:11.284517 2206 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:47:11.284697 kubelet[2206]: I0317 18:47:11.284678 2206 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:47:11.285276 kubelet[2206]: I0317 18:47:11.285193 2206 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 18:47:11.286109 kubelet[2206]: I0317 18:47:11.285686 2206 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 18:47:11.286392 kubelet[2206]: I0317 18:47:11.286367 2206 policy_none.go:49] "None policy: Start"
Mar 17 18:47:11.289322 kubelet[2206]: I0317 18:47:11.289128 2206 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:47:11.289930 kubelet[2206]: I0317 18:47:11.289905 2206 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:47:11.291065 kubelet[2206]: I0317 18:47:11.291032 2206 state_mem.go:75] "Updated machine memory state"
Mar 17 18:47:11.297589 kubelet[2206]: I0317 18:47:11.297535 2206 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:47:11.298241 kubelet[2206]: I0317 18:47:11.298140 2206 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:47:11.311745 kubelet[2206]: I0317 18:47:11.311703 2206 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:47:11.405151 kubelet[2206]: I0317 18:47:11.404987 2206 topology_manager.go:215] "Topology Admit Handler" podUID="a522ab8894f4ff996e8d0c5956936809" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.405796 kubelet[2206]: I0317 18:47:11.405743 2206 topology_manager.go:215] "Topology Admit Handler" podUID="82c83ac703e95cd036982facf5cd2cff" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.406158 kubelet[2206]: I0317 18:47:11.406120 2206 topology_manager.go:215] "Topology Admit Handler" podUID="adda34ff83d51806db68cda8ecf55521" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.425987 kubelet[2206]: W0317 18:47:11.425942 2206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:47:11.426757 kubelet[2206]: E0317 18:47:11.426692 2206 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-0-797a2fde87\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.427045 kubelet[2206]: I0317 18:47:11.426554 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a522ab8894f4ff996e8d0c5956936809-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-0-797a2fde87\" (UID: \"a522ab8894f4ff996e8d0c5956936809\") " pod="kube-system/kube-apiserver-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.427182 kubelet[2206]: W0317 18:47:11.427134 2206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:47:11.427334 kubelet[2206]: I0317 18:47:11.427304 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.427483 kubelet[2206]: I0317 18:47:11.427452 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.428431 kubelet[2206]: W0317 18:47:11.427804 2206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 17 18:47:11.431481 kubelet[2206]: I0317 18:47:11.430694 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.431872 kubelet[2206]: I0317 18:47:11.431811 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adda34ff83d51806db68cda8ecf55521-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-0-797a2fde87\" (UID: \"adda34ff83d51806db68cda8ecf55521\") " pod="kube-system/kube-scheduler-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.432571 kubelet[2206]: I0317 18:47:11.432530 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a522ab8894f4ff996e8d0c5956936809-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-0-797a2fde87\" (UID: \"a522ab8894f4ff996e8d0c5956936809\") " pod="kube-system/kube-apiserver-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.433623 kubelet[2206]: I0317 18:47:11.433543 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.433972 kubelet[2206]: I0317 18:47:11.433928 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82c83ac703e95cd036982facf5cd2cff-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-0-797a2fde87\" (UID: \"82c83ac703e95cd036982facf5cd2cff\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.434311 kubelet[2206]: I0317 18:47:11.434121 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a522ab8894f4ff996e8d0c5956936809-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-0-797a2fde87\" (UID: \"a522ab8894f4ff996e8d0c5956936809\") " pod="kube-system/kube-apiserver-ci-3510.3.7-0-797a2fde87"
Mar 17 18:47:11.737292 kubelet[2206]: E0317 18:47:11.730265 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:11.737292 kubelet[2206]: E0317 18:47:11.731646 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:11.745017 kubelet[2206]: E0317 18:47:11.739507 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:11.981200 kubelet[2206]: I0317 18:47:11.981151 2206 apiserver.go:52] "Watching apiserver"
Mar 17 18:47:12.012945 kubelet[2206]: I0317 18:47:12.012642 2206 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:47:12.176257 kubelet[2206]: E0317 18:47:12.176159 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:12.176804 kubelet[2206]: E0317 18:47:12.176761 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:12.178379 kubelet[2206]: E0317 18:47:12.178338 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:12.273617 kubelet[2206]: I0317 18:47:12.272695 2206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-0-797a2fde87" podStartSLOduration=1.272665116 podStartE2EDuration="1.272665116s" podCreationTimestamp="2025-03-17 18:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:47:12.271399586 +0000 UTC m=+1.578990746" watchObservedRunningTime="2025-03-17 18:47:12.272665116 +0000 UTC m=+1.580256275"
Mar 17 18:47:12.273900 kubelet[2206]: I0317 18:47:12.273641 2206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-0-797a2fde87" podStartSLOduration=1.273617143 podStartE2EDuration="1.273617143s" podCreationTimestamp="2025-03-17 18:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:47:12.250328998 +0000 UTC m=+1.557920153" watchObservedRunningTime="2025-03-17 18:47:12.273617143 +0000 UTC m=+1.581208313"
Mar 17 18:47:12.458072 sudo[2217]: pam_unix(sudo:session): session closed for user root
Mar 17 18:47:13.188017 kubelet[2206]: E0317 18:47:13.187968 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:14.192494 kubelet[2206]: E0317 18:47:14.192447 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:15.203291 kubelet[2206]: E0317 18:47:15.203107 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:15.993986 sudo[1456]: pam_unix(sudo:session): session closed for user root
Mar 17 18:47:16.000906 sshd[1450]: pam_unix(sshd:session): session closed for user core
Mar 17 18:47:16.010852 systemd[1]: sshd@4-134.199.210.138:22-139.178.68.195:34990.service: Deactivated successfully.
Mar 17 18:47:16.017661 systemd-logind[1305]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:47:16.018822 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:47:16.028716 systemd-logind[1305]: Removed session 5.
Mar 17 18:47:16.203494 kubelet[2206]: E0317 18:47:16.203114 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:17.214237 kubelet[2206]: E0317 18:47:17.214165 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:18.209576 kubelet[2206]: E0317 18:47:18.208765 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:19.443143 kubelet[2206]: E0317 18:47:19.442390 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:20.218352 kubelet[2206]: E0317 18:47:20.217322 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:20.707604 kubelet[2206]: I0317 18:47:20.707555 2206 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 18:47:20.710472 env[1312]: time="2025-03-17T18:47:20.710302851Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:47:20.711621 kubelet[2206]: I0317 18:47:20.711548 2206 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 18:47:21.645955 kubelet[2206]: I0317 18:47:21.645868 2206 topology_manager.go:215] "Topology Admit Handler" podUID="fea17960-1b43-4cb2-b0c6-603073e159de" podNamespace="kube-system" podName="kube-proxy-6hjq8"
Mar 17 18:47:21.665042 kubelet[2206]: I0317 18:47:21.664957 2206 topology_manager.go:215] "Topology Admit Handler" podUID="ea4895a4-e76a-4f76-a116-9b702eae9ff2" podNamespace="kube-system" podName="cilium-lk9n5"
Mar 17 18:47:21.704250 kubelet[2206]: I0317 18:47:21.704183 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-hostproc\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.704618 kubelet[2206]: I0317 18:47:21.704581 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-etc-cni-netd\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.704841 kubelet[2206]: I0317 18:47:21.704814 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qjwt\" (UniqueName: \"kubernetes.io/projected/ea4895a4-e76a-4f76-a116-9b702eae9ff2-kube-api-access-2qjwt\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.704988 kubelet[2206]: I0317 18:47:21.704965 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-bpf-maps\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705127 kubelet[2206]: I0317 18:47:21.705103 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-config-path\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705311 kubelet[2206]: I0317 18:47:21.705283 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-host-proc-sys-net\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705544 kubelet[2206]: I0317 18:47:21.705507 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-cgroup\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705630 kubelet[2206]: I0317 18:47:21.705553 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea4895a4-e76a-4f76-a116-9b702eae9ff2-clustermesh-secrets\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705630 kubelet[2206]: I0317 18:47:21.705578 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fea17960-1b43-4cb2-b0c6-603073e159de-kube-proxy\") pod \"kube-proxy-6hjq8\" (UID: \"fea17960-1b43-4cb2-b0c6-603073e159de\") " pod="kube-system/kube-proxy-6hjq8"
Mar 17 18:47:21.705630 kubelet[2206]: I0317 18:47:21.705596 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fea17960-1b43-4cb2-b0c6-603073e159de-xtables-lock\") pod \"kube-proxy-6hjq8\" (UID: \"fea17960-1b43-4cb2-b0c6-603073e159de\") " pod="kube-system/kube-proxy-6hjq8"
Mar 17 18:47:21.705630 kubelet[2206]: I0317 18:47:21.705613 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fea17960-1b43-4cb2-b0c6-603073e159de-lib-modules\") pod \"kube-proxy-6hjq8\" (UID: \"fea17960-1b43-4cb2-b0c6-603073e159de\") " pod="kube-system/kube-proxy-6hjq8"
Mar 17 18:47:21.705841 kubelet[2206]: I0317 18:47:21.705632 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-xtables-lock\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705841 kubelet[2206]: I0317 18:47:21.705652 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-host-proc-sys-kernel\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705841 kubelet[2206]: I0317 18:47:21.705705 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea4895a4-e76a-4f76-a116-9b702eae9ff2-hubble-tls\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705841 kubelet[2206]: I0317 18:47:21.705723 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-run\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705841 kubelet[2206]: I0317 18:47:21.705750 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cni-path\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.705841 kubelet[2206]: I0317 18:47:21.705776 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-lib-modules\") pod \"cilium-lk9n5\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") " pod="kube-system/cilium-lk9n5"
Mar 17 18:47:21.706185 kubelet[2206]: I0317 18:47:21.705807 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjjsp\" (UniqueName: \"kubernetes.io/projected/fea17960-1b43-4cb2-b0c6-603073e159de-kube-api-access-xjjsp\") pod \"kube-proxy-6hjq8\" (UID: \"fea17960-1b43-4cb2-b0c6-603073e159de\") " pod="kube-system/kube-proxy-6hjq8"
Mar 17 18:47:21.798068 kubelet[2206]: I0317 18:47:21.797888 2206 topology_manager.go:215] "Topology Admit Handler" podUID="050d1bc5-ff7b-4af6-92fb-9512ec9e97d1" podNamespace="kube-system" podName="cilium-operator-599987898-2tdnz"
Mar 17 18:47:21.910445 kubelet[2206]: I0317 18:47:21.910272 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/050d1bc5-ff7b-4af6-92fb-9512ec9e97d1-cilium-config-path\") pod \"cilium-operator-599987898-2tdnz\" (UID: \"050d1bc5-ff7b-4af6-92fb-9512ec9e97d1\") " pod="kube-system/cilium-operator-599987898-2tdnz"
Mar 17 18:47:21.910825 kubelet[2206]: I0317 18:47:21.910781 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ds45\" (UniqueName: \"kubernetes.io/projected/050d1bc5-ff7b-4af6-92fb-9512ec9e97d1-kube-api-access-4ds45\") pod \"cilium-operator-599987898-2tdnz\" (UID: \"050d1bc5-ff7b-4af6-92fb-9512ec9e97d1\") " pod="kube-system/cilium-operator-599987898-2tdnz"
Mar 17 18:47:21.979637 kubelet[2206]: E0317 18:47:21.979592 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:21.984490 env[1312]: time="2025-03-17T18:47:21.983696176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lk9n5,Uid:ea4895a4-e76a-4f76-a116-9b702eae9ff2,Namespace:kube-system,Attempt:0,}"
Mar 17 18:47:21.993475 kubelet[2206]: E0317 18:47:21.992909 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:22.005440 env[1312]: time="2025-03-17T18:47:22.004561994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6hjq8,Uid:fea17960-1b43-4cb2-b0c6-603073e159de,Namespace:kube-system,Attempt:0,}"
Mar 17 18:47:22.079186 env[1312]: time="2025-03-17T18:47:22.079020504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:47:22.079186 env[1312]: time="2025-03-17T18:47:22.079099829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:47:22.079567 env[1312]: time="2025-03-17T18:47:22.079128228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:47:22.079567 env[1312]: time="2025-03-17T18:47:22.079384639Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee pid=2289 runtime=io.containerd.runc.v2
Mar 17 18:47:22.143579 env[1312]: time="2025-03-17T18:47:22.143354929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:47:22.144366 env[1312]: time="2025-03-17T18:47:22.144187953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:47:22.144674 env[1312]: time="2025-03-17T18:47:22.144610573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:47:22.145168 env[1312]: time="2025-03-17T18:47:22.145114324Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ae1a50e47328b9ad675bb892abb8ae2d19b77235a291b51a4fb30fac61ad60d pid=2311 runtime=io.containerd.runc.v2
Mar 17 18:47:22.209582 env[1312]: time="2025-03-17T18:47:22.207625783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lk9n5,Uid:ea4895a4-e76a-4f76-a116-9b702eae9ff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\""
Mar 17 18:47:22.215800 kubelet[2206]: E0317 18:47:22.215322 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:22.219029 env[1312]: time="2025-03-17T18:47:22.218788139Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:47:22.271840 env[1312]: time="2025-03-17T18:47:22.271762255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6hjq8,Uid:fea17960-1b43-4cb2-b0c6-603073e159de,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ae1a50e47328b9ad675bb892abb8ae2d19b77235a291b51a4fb30fac61ad60d\""
Mar 17 18:47:22.273977 kubelet[2206]: E0317 18:47:22.273492 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:22.289435 env[1312]: time="2025-03-17T18:47:22.289365437Z" level=info msg="CreateContainer within sandbox \"6ae1a50e47328b9ad675bb892abb8ae2d19b77235a291b51a4fb30fac61ad60d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:47:22.338566 env[1312]: time="2025-03-17T18:47:22.338480469Z" level=info msg="CreateContainer within sandbox \"6ae1a50e47328b9ad675bb892abb8ae2d19b77235a291b51a4fb30fac61ad60d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0fa847ab6f21df7faaaa8a19204eeed9d56566fe7505bf0dca3921cd2edfd6c4\""
Mar 17 18:47:22.342994 env[1312]: time="2025-03-17T18:47:22.342802283Z" level=info msg="StartContainer for \"0fa847ab6f21df7faaaa8a19204eeed9d56566fe7505bf0dca3921cd2edfd6c4\""
Mar 17 18:47:22.418923 kubelet[2206]: E0317 18:47:22.413055 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:22.419176 env[1312]: time="2025-03-17T18:47:22.414365150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2tdnz,Uid:050d1bc5-ff7b-4af6-92fb-9512ec9e97d1,Namespace:kube-system,Attempt:0,}"
Mar 17 18:47:22.449683 env[1312]: time="2025-03-17T18:47:22.449570565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:47:22.449683 env[1312]: time="2025-03-17T18:47:22.449622763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:47:22.449925 env[1312]: time="2025-03-17T18:47:22.449663166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:47:22.450018 env[1312]: time="2025-03-17T18:47:22.449990655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58 pid=2399 runtime=io.containerd.runc.v2
Mar 17 18:47:22.502203 env[1312]: time="2025-03-17T18:47:22.496422939Z" level=info msg="StartContainer for \"0fa847ab6f21df7faaaa8a19204eeed9d56566fe7505bf0dca3921cd2edfd6c4\" returns successfully"
Mar 17 18:47:22.574166 env[1312]: time="2025-03-17T18:47:22.574061138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-2tdnz,Uid:050d1bc5-ff7b-4af6-92fb-9512ec9e97d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58\""
Mar 17 18:47:22.577726 kubelet[2206]: E0317 18:47:22.576481 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:23.294306 kubelet[2206]: E0317 18:47:23.293733 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:47:30.768525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161539984.mount: Deactivated successfully.
Mar 17 18:47:36.133602 env[1312]: time="2025-03-17T18:47:36.133525315Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:47:36.138645 env[1312]: time="2025-03-17T18:47:36.138566295Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:47:36.142532 env[1312]: time="2025-03-17T18:47:36.142465502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:47:36.144294 env[1312]: time="2025-03-17T18:47:36.144190978Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 18:47:36.151237 env[1312]: time="2025-03-17T18:47:36.151083319Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 18:47:36.159585 env[1312]: time="2025-03-17T18:47:36.159516845Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:47:36.243113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1998762976.mount: Deactivated successfully.
Mar 17 18:47:36.259899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820758029.mount: Deactivated successfully.
Mar 17 18:47:36.271252 env[1312]: time="2025-03-17T18:47:36.266936289Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\""
Mar 17 18:47:36.276069 env[1312]: time="2025-03-17T18:47:36.275973894Z" level=info msg="StartContainer for \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\""
Mar 17 18:47:36.407021 env[1312]: time="2025-03-17T18:47:36.406044753Z" level=info msg="StartContainer for \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\" returns successfully"
Mar 17 18:47:36.515502 env[1312]: time="2025-03-17T18:47:36.515429062Z" level=info msg="shim disconnected" id=92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557
Mar 17 18:47:36.515968 env[1312]: time="2025-03-17T18:47:36.515921634Z" level=warning msg="cleaning up after shim disconnected" id=92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557 namespace=k8s.io
Mar 17 18:47:36.516140 env[1312]: time="2025-03-17T18:47:36.516114968Z" level=info msg="cleaning up dead shim"
Mar 17 18:47:36.553930 env[1312]: time="2025-03-17T18:47:36.553251675Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:47:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2618 runtime=io.containerd.runc.v2\n"
Mar 17 18:47:37.235480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557-rootfs.mount: Deactivated successfully.
Mar 17 18:47:37.448013 kubelet[2206]: E0317 18:47:37.445525 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:37.490087 env[1312]: time="2025-03-17T18:47:37.463631384Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:47:37.530431 kubelet[2206]: I0317 18:47:37.527698 2206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6hjq8" podStartSLOduration=16.527666428 podStartE2EDuration="16.527666428s" podCreationTimestamp="2025-03-17 18:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:47:23.324015772 +0000 UTC m=+12.631606929" watchObservedRunningTime="2025-03-17 18:47:37.527666428 +0000 UTC m=+26.835257587" Mar 17 18:47:37.575927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031514753.mount: Deactivated successfully. Mar 17 18:47:37.618714 env[1312]: time="2025-03-17T18:47:37.617114099Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\"" Mar 17 18:47:37.622777 env[1312]: time="2025-03-17T18:47:37.622717541Z" level=info msg="StartContainer for \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\"" Mar 17 18:47:37.771673 env[1312]: time="2025-03-17T18:47:37.771072605Z" level=info msg="StartContainer for \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\" returns successfully" Mar 17 18:47:37.794436 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 17 18:47:37.800261 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:47:37.800956 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:47:37.807521 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:47:37.839164 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:47:37.933471 env[1312]: time="2025-03-17T18:47:37.933405409Z" level=info msg="shim disconnected" id=bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca Mar 17 18:47:37.934158 env[1312]: time="2025-03-17T18:47:37.934114979Z" level=warning msg="cleaning up after shim disconnected" id=bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca namespace=k8s.io Mar 17 18:47:37.934375 env[1312]: time="2025-03-17T18:47:37.934346992Z" level=info msg="cleaning up dead shim" Mar 17 18:47:38.006101 env[1312]: time="2025-03-17T18:47:38.006031650Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:47:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2685 runtime=io.containerd.runc.v2\n" Mar 17 18:47:38.463706 kubelet[2206]: E0317 18:47:38.451384 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:38.464452 env[1312]: time="2025-03-17T18:47:38.458859870Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:47:38.577434 env[1312]: time="2025-03-17T18:47:38.577350543Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\"" Mar 17 18:47:38.580149 env[1312]: time="2025-03-17T18:47:38.580099038Z" level=info msg="StartContainer for 
\"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\"" Mar 17 18:47:38.810881 env[1312]: time="2025-03-17T18:47:38.810451122Z" level=info msg="StartContainer for \"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\" returns successfully" Mar 17 18:47:38.868749 env[1312]: time="2025-03-17T18:47:38.868680141Z" level=info msg="shim disconnected" id=c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1 Mar 17 18:47:38.869365 env[1312]: time="2025-03-17T18:47:38.869193948Z" level=warning msg="cleaning up after shim disconnected" id=c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1 namespace=k8s.io Mar 17 18:47:38.869568 env[1312]: time="2025-03-17T18:47:38.869534697Z" level=info msg="cleaning up dead shim" Mar 17 18:47:38.911202 env[1312]: time="2025-03-17T18:47:38.910888980Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:47:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2746 runtime=io.containerd.runc.v2\n" Mar 17 18:47:39.234383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1-rootfs.mount: Deactivated successfully. 
Mar 17 18:47:39.313361 env[1312]: time="2025-03-17T18:47:39.311323739Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:47:39.313361 env[1312]: time="2025-03-17T18:47:39.311817276Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 18:47:39.313361 env[1312]: time="2025-03-17T18:47:39.313103667Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:47:39.314141 env[1312]: time="2025-03-17T18:47:39.313963684Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:47:39.325363 env[1312]: time="2025-03-17T18:47:39.325291897Z" level=info msg="CreateContainer within sandbox \"261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:47:39.354807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135371594.mount: Deactivated successfully. Mar 17 18:47:39.378907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2676734802.mount: Deactivated successfully. 
Mar 17 18:47:39.392263 env[1312]: time="2025-03-17T18:47:39.391747481Z" level=info msg="CreateContainer within sandbox \"261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\"" Mar 17 18:47:39.397030 env[1312]: time="2025-03-17T18:47:39.396859866Z" level=info msg="StartContainer for \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\"" Mar 17 18:47:39.469901 kubelet[2206]: E0317 18:47:39.469825 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:39.474399 env[1312]: time="2025-03-17T18:47:39.474306854Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:47:39.566016 env[1312]: time="2025-03-17T18:47:39.560063709Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\"" Mar 17 18:47:39.566016 env[1312]: time="2025-03-17T18:47:39.561652272Z" level=info msg="StartContainer for \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\"" Mar 17 18:47:39.588071 env[1312]: time="2025-03-17T18:47:39.587973647Z" level=info msg="StartContainer for \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\" returns successfully" Mar 17 18:47:39.767542 env[1312]: time="2025-03-17T18:47:39.766692811Z" level=info msg="StartContainer for \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\" returns successfully" Mar 17 18:47:39.825558 env[1312]: 
time="2025-03-17T18:47:39.824396825Z" level=info msg="shim disconnected" id=8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9 Mar 17 18:47:39.825558 env[1312]: time="2025-03-17T18:47:39.824456828Z" level=warning msg="cleaning up after shim disconnected" id=8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9 namespace=k8s.io Mar 17 18:47:39.825558 env[1312]: time="2025-03-17T18:47:39.824471940Z" level=info msg="cleaning up dead shim" Mar 17 18:47:39.898406 env[1312]: time="2025-03-17T18:47:39.898337971Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:47:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2841 runtime=io.containerd.runc.v2\n" Mar 17 18:47:40.482394 kubelet[2206]: E0317 18:47:40.482331 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:40.497781 env[1312]: time="2025-03-17T18:47:40.497705366Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:47:40.498683 kubelet[2206]: E0317 18:47:40.498639 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:40.543101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2234033012.mount: Deactivated successfully. 
Mar 17 18:47:40.571291 env[1312]: time="2025-03-17T18:47:40.571184875Z" level=info msg="CreateContainer within sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\"" Mar 17 18:47:40.580696 env[1312]: time="2025-03-17T18:47:40.580623989Z" level=info msg="StartContainer for \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\"" Mar 17 18:47:40.871113 env[1312]: time="2025-03-17T18:47:40.870482133Z" level=info msg="StartContainer for \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\" returns successfully" Mar 17 18:47:41.532905 kubelet[2206]: I0317 18:47:41.532825 2206 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:47:41.550728 kubelet[2206]: E0317 18:47:41.550664 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:41.552533 kubelet[2206]: E0317 18:47:41.552476 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:41.687498 kubelet[2206]: I0317 18:47:41.687387 2206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-2tdnz" podStartSLOduration=3.949381387 podStartE2EDuration="20.687351744s" podCreationTimestamp="2025-03-17 18:47:21 +0000 UTC" firstStartedPulling="2025-03-17 18:47:22.578051012 +0000 UTC m=+11.885642147" lastFinishedPulling="2025-03-17 18:47:39.316021359 +0000 UTC m=+28.623612504" observedRunningTime="2025-03-17 18:47:40.928593145 +0000 UTC m=+30.236184299" watchObservedRunningTime="2025-03-17 18:47:41.687351744 +0000 UTC m=+30.994942911" Mar 17 18:47:41.687814 
kubelet[2206]: I0317 18:47:41.687750 2206 topology_manager.go:215] "Topology Admit Handler" podUID="6ffbd3c1-66dd-4aee-a7eb-bdf887482936" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9vr5n" Mar 17 18:47:41.694096 kubelet[2206]: I0317 18:47:41.694030 2206 topology_manager.go:215] "Topology Admit Handler" podUID="7a973591-d36a-4015-af73-b2575a5168cb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2pnbg" Mar 17 18:47:41.708521 kubelet[2206]: W0317 18:47:41.708453 2206 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.7-0-797a2fde87" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-0-797a2fde87' and this object Mar 17 18:47:41.708824 kubelet[2206]: E0317 18:47:41.708796 2206 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.7-0-797a2fde87" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.7-0-797a2fde87' and this object Mar 17 18:47:41.777567 kubelet[2206]: I0317 18:47:41.777486 2206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lk9n5" podStartSLOduration=6.8486140639999995 podStartE2EDuration="20.777457997s" podCreationTimestamp="2025-03-17 18:47:21 +0000 UTC" firstStartedPulling="2025-03-17 18:47:22.217905834 +0000 UTC m=+11.525496968" lastFinishedPulling="2025-03-17 18:47:36.146749753 +0000 UTC m=+25.454340901" observedRunningTime="2025-03-17 18:47:41.712133376 +0000 UTC m=+31.019724533" watchObservedRunningTime="2025-03-17 18:47:41.777457997 +0000 UTC m=+31.085049152" Mar 17 18:47:41.834778 kubelet[2206]: I0317 18:47:41.834616 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5dr7n\" (UniqueName: \"kubernetes.io/projected/7a973591-d36a-4015-af73-b2575a5168cb-kube-api-access-5dr7n\") pod \"coredns-7db6d8ff4d-2pnbg\" (UID: \"7a973591-d36a-4015-af73-b2575a5168cb\") " pod="kube-system/coredns-7db6d8ff4d-2pnbg" Mar 17 18:47:41.835329 kubelet[2206]: I0317 18:47:41.835300 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a973591-d36a-4015-af73-b2575a5168cb-config-volume\") pod \"coredns-7db6d8ff4d-2pnbg\" (UID: \"7a973591-d36a-4015-af73-b2575a5168cb\") " pod="kube-system/coredns-7db6d8ff4d-2pnbg" Mar 17 18:47:41.835514 kubelet[2206]: I0317 18:47:41.835471 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ffbd3c1-66dd-4aee-a7eb-bdf887482936-config-volume\") pod \"coredns-7db6d8ff4d-9vr5n\" (UID: \"6ffbd3c1-66dd-4aee-a7eb-bdf887482936\") " pod="kube-system/coredns-7db6d8ff4d-9vr5n" Mar 17 18:47:41.835720 kubelet[2206]: I0317 18:47:41.835697 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2qkg\" (UniqueName: \"kubernetes.io/projected/6ffbd3c1-66dd-4aee-a7eb-bdf887482936-kube-api-access-s2qkg\") pod \"coredns-7db6d8ff4d-9vr5n\" (UID: \"6ffbd3c1-66dd-4aee-a7eb-bdf887482936\") " pod="kube-system/coredns-7db6d8ff4d-9vr5n" Mar 17 18:47:42.552802 kubelet[2206]: E0317 18:47:42.552756 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:42.592750 kubelet[2206]: E0317 18:47:42.592624 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:42.596825 env[1312]: 
time="2025-03-17T18:47:42.596012199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9vr5n,Uid:6ffbd3c1-66dd-4aee-a7eb-bdf887482936,Namespace:kube-system,Attempt:0,}" Mar 17 18:47:42.617278 kubelet[2206]: E0317 18:47:42.613589 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:42.618934 env[1312]: time="2025-03-17T18:47:42.618395352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2pnbg,Uid:7a973591-d36a-4015-af73-b2575a5168cb,Namespace:kube-system,Attempt:0,}" Mar 17 18:47:43.556332 kubelet[2206]: E0317 18:47:43.556268 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:44.619326 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 18:47:44.621366 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:47:44.622395 systemd-networkd[1080]: cilium_host: Link UP Mar 17 18:47:44.622745 systemd-networkd[1080]: cilium_net: Link UP Mar 17 18:47:44.623021 systemd-networkd[1080]: cilium_net: Gained carrier Mar 17 18:47:44.624833 systemd-networkd[1080]: cilium_host: Gained carrier Mar 17 18:47:44.716753 systemd-networkd[1080]: cilium_net: Gained IPv6LL Mar 17 18:47:44.878712 systemd-networkd[1080]: cilium_vxlan: Link UP Mar 17 18:47:44.878724 systemd-networkd[1080]: cilium_vxlan: Gained carrier Mar 17 18:47:44.995491 systemd-networkd[1080]: cilium_host: Gained IPv6LL Mar 17 18:47:45.499253 kernel: NET: Registered PF_ALG protocol family Mar 17 18:47:46.251456 systemd-networkd[1080]: cilium_vxlan: Gained IPv6LL Mar 17 18:47:46.848437 systemd-networkd[1080]: lxc_health: Link UP Mar 17 18:47:46.857324 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 
17 18:47:46.857954 systemd-networkd[1080]: lxc_health: Gained carrier Mar 17 18:47:47.301753 systemd-networkd[1080]: lxc34006272f5e6: Link UP Mar 17 18:47:47.311414 kernel: eth0: renamed from tmp99029 Mar 17 18:47:47.319043 systemd-networkd[1080]: lxc34006272f5e6: Gained carrier Mar 17 18:47:47.319368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc34006272f5e6: link becomes ready Mar 17 18:47:47.342517 systemd-networkd[1080]: lxc7ee45916a770: Link UP Mar 17 18:47:47.361249 kernel: eth0: renamed from tmp6bbd4 Mar 17 18:47:47.367341 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7ee45916a770: link becomes ready Mar 17 18:47:47.367170 systemd-networkd[1080]: lxc7ee45916a770: Gained carrier Mar 17 18:47:47.987025 kubelet[2206]: E0317 18:47:47.986969 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:48.253380 systemd-networkd[1080]: lxc_health: Gained IPv6LL Mar 17 18:47:48.427463 systemd-networkd[1080]: lxc7ee45916a770: Gained IPv6LL Mar 17 18:47:48.568829 kubelet[2206]: E0317 18:47:48.568664 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:49.003484 systemd-networkd[1080]: lxc34006272f5e6: Gained IPv6LL Mar 17 18:47:49.572906 kubelet[2206]: E0317 18:47:49.572846 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:54.381273 env[1312]: time="2025-03-17T18:47:54.381145864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:47:54.381273 env[1312]: time="2025-03-17T18:47:54.381276850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:47:54.382316 env[1312]: time="2025-03-17T18:47:54.381305527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:47:54.383600 env[1312]: time="2025-03-17T18:47:54.383482233Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/990290aa6ff46bce4aab6ad6eea77aec5fa11220b8c8a42bfd013be92681eaa0 pid=3396 runtime=io.containerd.runc.v2 Mar 17 18:47:54.448682 systemd[1]: run-containerd-runc-k8s.io-990290aa6ff46bce4aab6ad6eea77aec5fa11220b8c8a42bfd013be92681eaa0-runc.rjtyUs.mount: Deactivated successfully. Mar 17 18:47:54.516826 env[1312]: time="2025-03-17T18:47:54.515050928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:47:54.516826 env[1312]: time="2025-03-17T18:47:54.515130588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:47:54.516826 env[1312]: time="2025-03-17T18:47:54.515150538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:47:54.516826 env[1312]: time="2025-03-17T18:47:54.515784427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bbd4a683efd835270c58dbc3265554535741604c5a32567746a3b69d0d0f1fa pid=3430 runtime=io.containerd.runc.v2 Mar 17 18:47:54.618053 env[1312]: time="2025-03-17T18:47:54.617705950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2pnbg,Uid:7a973591-d36a-4015-af73-b2575a5168cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"990290aa6ff46bce4aab6ad6eea77aec5fa11220b8c8a42bfd013be92681eaa0\"" Mar 17 18:47:54.622334 kubelet[2206]: E0317 18:47:54.621508 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:54.630701 env[1312]: time="2025-03-17T18:47:54.630613594Z" level=info msg="CreateContainer within sandbox \"990290aa6ff46bce4aab6ad6eea77aec5fa11220b8c8a42bfd013be92681eaa0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:47:54.686663 env[1312]: time="2025-03-17T18:47:54.686492803Z" level=info msg="CreateContainer within sandbox \"990290aa6ff46bce4aab6ad6eea77aec5fa11220b8c8a42bfd013be92681eaa0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c1715b7cc9e17bc6e667b2a27d7b6e1735b9b2607ea4e6d71f6f1b4b8d67cfd\"" Mar 17 18:47:54.688555 env[1312]: time="2025-03-17T18:47:54.688498958Z" level=info msg="StartContainer for \"4c1715b7cc9e17bc6e667b2a27d7b6e1735b9b2607ea4e6d71f6f1b4b8d67cfd\"" Mar 17 18:47:54.717016 env[1312]: time="2025-03-17T18:47:54.716915485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9vr5n,Uid:6ffbd3c1-66dd-4aee-a7eb-bdf887482936,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bbd4a683efd835270c58dbc3265554535741604c5a32567746a3b69d0d0f1fa\"" Mar 17 
18:47:54.720156 kubelet[2206]: E0317 18:47:54.718737 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:54.732071 env[1312]: time="2025-03-17T18:47:54.732001627Z" level=info msg="CreateContainer within sandbox \"6bbd4a683efd835270c58dbc3265554535741604c5a32567746a3b69d0d0f1fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:47:54.768466 env[1312]: time="2025-03-17T18:47:54.768398104Z" level=info msg="CreateContainer within sandbox \"6bbd4a683efd835270c58dbc3265554535741604c5a32567746a3b69d0d0f1fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"afa02e52087df22690ae6abdf53ef4e237cce4b77668d943bc5564eb224558f0\"" Mar 17 18:47:54.775278 env[1312]: time="2025-03-17T18:47:54.774419609Z" level=info msg="StartContainer for \"afa02e52087df22690ae6abdf53ef4e237cce4b77668d943bc5564eb224558f0\"" Mar 17 18:47:54.863458 env[1312]: time="2025-03-17T18:47:54.863377831Z" level=info msg="StartContainer for \"4c1715b7cc9e17bc6e667b2a27d7b6e1735b9b2607ea4e6d71f6f1b4b8d67cfd\" returns successfully" Mar 17 18:47:54.903506 env[1312]: time="2025-03-17T18:47:54.903254392Z" level=info msg="StartContainer for \"afa02e52087df22690ae6abdf53ef4e237cce4b77668d943bc5564eb224558f0\" returns successfully" Mar 17 18:47:55.394228 systemd[1]: run-containerd-runc-k8s.io-6bbd4a683efd835270c58dbc3265554535741604c5a32567746a3b69d0d0f1fa-runc.Bh6LIR.mount: Deactivated successfully. 
Mar 17 18:47:55.641435 kubelet[2206]: E0317 18:47:55.641383 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:55.670451 kubelet[2206]: E0317 18:47:55.663591 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:55.684389 kubelet[2206]: I0317 18:47:55.678410 2206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2pnbg" podStartSLOduration=34.678356093 podStartE2EDuration="34.678356093s" podCreationTimestamp="2025-03-17 18:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:47:55.671332716 +0000 UTC m=+44.978923868" watchObservedRunningTime="2025-03-17 18:47:55.678356093 +0000 UTC m=+44.985947251" Mar 17 18:47:55.700904 kubelet[2206]: I0317 18:47:55.700802 2206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9vr5n" podStartSLOduration=34.700760818 podStartE2EDuration="34.700760818s" podCreationTimestamp="2025-03-17 18:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:47:55.699895421 +0000 UTC m=+45.007486575" watchObservedRunningTime="2025-03-17 18:47:55.700760818 +0000 UTC m=+45.008351977" Mar 17 18:47:56.674012 kubelet[2206]: E0317 18:47:56.673955 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:56.674921 kubelet[2206]: E0317 18:47:56.674890 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:57.677262 kubelet[2206]: E0317 18:47:57.677196 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:47:57.679107 kubelet[2206]: E0317 18:47:57.679066 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:48:07.408281 systemd[1]: Started sshd@5-134.199.210.138:22-139.178.68.195:59878.service. Mar 17 18:48:07.565926 sshd[3550]: Accepted publickey for core from 139.178.68.195 port 59878 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:48:07.570944 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:48:07.590200 systemd[1]: Started session-6.scope. Mar 17 18:48:07.594139 systemd-logind[1305]: New session 6 of user core. Mar 17 18:48:08.172571 sshd[3550]: pam_unix(sshd:session): session closed for user core Mar 17 18:48:08.198974 systemd[1]: sshd@5-134.199.210.138:22-139.178.68.195:59878.service: Deactivated successfully. Mar 17 18:48:08.203759 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:48:08.204463 systemd-logind[1305]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:48:08.206653 systemd-logind[1305]: Removed session 6. Mar 17 18:48:13.173777 systemd[1]: Started sshd@6-134.199.210.138:22-139.178.68.195:59892.service. Mar 17 18:48:13.243631 sshd[3566]: Accepted publickey for core from 139.178.68.195 port 59892 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:48:13.246360 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:48:13.254706 systemd-logind[1305]: New session 7 of user core. 
Mar 17 18:48:13.256525 systemd[1]: Started session-7.scope.
Mar 17 18:48:13.501712 sshd[3566]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:13.506615 systemd[1]: sshd@6-134.199.210.138:22-139.178.68.195:59892.service: Deactivated successfully.
Mar 17 18:48:13.509384 systemd-logind[1305]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:48:13.510715 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:48:13.513055 systemd-logind[1305]: Removed session 7.
Mar 17 18:48:18.514181 systemd[1]: Started sshd@7-134.199.210.138:22-139.178.68.195:40200.service.
Mar 17 18:48:18.589119 sshd[3581]: Accepted publickey for core from 139.178.68.195 port 40200 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:18.593028 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:18.613370 systemd[1]: Started session-8.scope.
Mar 17 18:48:18.615328 systemd-logind[1305]: New session 8 of user core.
Mar 17 18:48:18.832252 sshd[3581]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:18.840299 systemd[1]: sshd@7-134.199.210.138:22-139.178.68.195:40200.service: Deactivated successfully.
Mar 17 18:48:18.842140 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:48:18.846937 systemd-logind[1305]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:48:18.849356 systemd-logind[1305]: Removed session 8.
Mar 17 18:48:22.104990 kubelet[2206]: E0317 18:48:22.104913 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:48:23.844555 systemd[1]: Started sshd@8-134.199.210.138:22-139.178.68.195:40216.service.
Mar 17 18:48:23.938883 sshd[3597]: Accepted publickey for core from 139.178.68.195 port 40216 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:23.942391 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:23.955641 systemd[1]: Started session-9.scope.
Mar 17 18:48:23.957738 systemd-logind[1305]: New session 9 of user core.
Mar 17 18:48:24.214591 sshd[3597]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:24.220430 systemd[1]: Started sshd@9-134.199.210.138:22-139.178.68.195:40218.service.
Mar 17 18:48:24.230907 systemd[1]: sshd@8-134.199.210.138:22-139.178.68.195:40216.service: Deactivated successfully.
Mar 17 18:48:24.235176 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:48:24.236304 systemd-logind[1305]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:48:24.241051 systemd-logind[1305]: Removed session 9.
Mar 17 18:48:24.302353 sshd[3608]: Accepted publickey for core from 139.178.68.195 port 40218 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:24.305191 sshd[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:24.320509 systemd[1]: Started session-10.scope.
Mar 17 18:48:24.320902 systemd-logind[1305]: New session 10 of user core.
Mar 17 18:48:24.823439 systemd[1]: Started sshd@10-134.199.210.138:22-139.178.68.195:40226.service.
Mar 17 18:48:24.831389 sshd[3608]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:24.869959 systemd[1]: sshd@9-134.199.210.138:22-139.178.68.195:40218.service: Deactivated successfully.
Mar 17 18:48:24.871583 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:48:24.909291 systemd-logind[1305]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:48:24.930015 systemd-logind[1305]: Removed session 10.
Mar 17 18:48:25.048696 sshd[3619]: Accepted publickey for core from 139.178.68.195 port 40226 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:25.058045 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:25.071487 systemd-logind[1305]: New session 11 of user core.
Mar 17 18:48:25.073315 systemd[1]: Started session-11.scope.
Mar 17 18:48:25.107241 kubelet[2206]: E0317 18:48:25.107190 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:48:25.426834 sshd[3619]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:25.432650 systemd[1]: sshd@10-134.199.210.138:22-139.178.68.195:40226.service: Deactivated successfully.
Mar 17 18:48:25.434846 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:48:25.435466 systemd-logind[1305]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:48:25.440136 systemd-logind[1305]: Removed session 11.
Mar 17 18:48:30.434363 systemd[1]: Started sshd@11-134.199.210.138:22-139.178.68.195:49052.service.
Mar 17 18:48:30.519837 sshd[3633]: Accepted publickey for core from 139.178.68.195 port 49052 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:30.523592 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:30.536033 systemd[1]: Started session-12.scope.
Mar 17 18:48:30.538356 systemd-logind[1305]: New session 12 of user core.
Mar 17 18:48:30.777560 sshd[3633]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:30.784278 systemd[1]: sshd@11-134.199.210.138:22-139.178.68.195:49052.service: Deactivated successfully.
Mar 17 18:48:30.789081 systemd-logind[1305]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:48:30.789186 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:48:30.793044 systemd-logind[1305]: Removed session 12.
Mar 17 18:48:35.782943 systemd[1]: Started sshd@12-134.199.210.138:22-139.178.68.195:43292.service.
Mar 17 18:48:35.845038 sshd[3646]: Accepted publickey for core from 139.178.68.195 port 43292 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:35.848917 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:35.859998 systemd-logind[1305]: New session 13 of user core.
Mar 17 18:48:35.861632 systemd[1]: Started session-13.scope.
Mar 17 18:48:36.046785 sshd[3646]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:36.052823 systemd-logind[1305]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:48:36.053080 systemd[1]: sshd@12-134.199.210.138:22-139.178.68.195:43292.service: Deactivated successfully.
Mar 17 18:48:36.055006 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:48:36.056303 systemd-logind[1305]: Removed session 13.
Mar 17 18:48:36.104706 kubelet[2206]: E0317 18:48:36.104656 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:48:41.052719 systemd[1]: Started sshd@13-134.199.210.138:22-139.178.68.195:43308.service.
Mar 17 18:48:41.132300 sshd[3659]: Accepted publickey for core from 139.178.68.195 port 43308 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:41.138992 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:41.159019 systemd-logind[1305]: New session 14 of user core.
Mar 17 18:48:41.160815 systemd[1]: Started session-14.scope.
Mar 17 18:48:41.391542 sshd[3659]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:41.397668 systemd[1]: Started sshd@14-134.199.210.138:22-139.178.68.195:43316.service.
Mar 17 18:48:41.404809 systemd[1]: sshd@13-134.199.210.138:22-139.178.68.195:43308.service: Deactivated successfully.
Mar 17 18:48:41.409841 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:48:41.411784 systemd-logind[1305]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:48:41.414752 systemd-logind[1305]: Removed session 14.
Mar 17 18:48:41.478459 sshd[3670]: Accepted publickey for core from 139.178.68.195 port 43316 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:41.481755 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:41.490757 systemd[1]: Started session-15.scope.
Mar 17 18:48:41.492335 systemd-logind[1305]: New session 15 of user core.
Mar 17 18:48:42.106622 kubelet[2206]: E0317 18:48:42.106560 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:48:42.112682 sshd[3670]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:42.119610 systemd[1]: Started sshd@15-134.199.210.138:22-139.178.68.195:43322.service.
Mar 17 18:48:42.144193 systemd[1]: sshd@14-134.199.210.138:22-139.178.68.195:43316.service: Deactivated successfully.
Mar 17 18:48:42.146344 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:48:42.147040 systemd-logind[1305]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:48:42.148715 systemd-logind[1305]: Removed session 15.
Mar 17 18:48:42.218929 sshd[3681]: Accepted publickey for core from 139.178.68.195 port 43322 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:42.222760 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:42.235287 systemd[1]: Started session-16.scope.
Mar 17 18:48:42.237957 systemd-logind[1305]: New session 16 of user core.
Mar 17 18:48:45.028849 sshd[3681]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:45.038080 systemd[1]: Started sshd@16-134.199.210.138:22-139.178.68.195:43336.service.
Mar 17 18:48:45.050539 systemd[1]: sshd@15-134.199.210.138:22-139.178.68.195:43322.service: Deactivated successfully.
Mar 17 18:48:45.054419 systemd-logind[1305]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:48:45.057737 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:48:45.065147 systemd-logind[1305]: Removed session 16.
Mar 17 18:48:45.179913 sshd[3699]: Accepted publickey for core from 139.178.68.195 port 43336 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:45.190801 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:45.210892 systemd-logind[1305]: New session 17 of user core.
Mar 17 18:48:45.212735 systemd[1]: Started session-17.scope.
Mar 17 18:48:46.159492 systemd[1]: Started sshd@17-134.199.210.138:22-139.178.68.195:42676.service.
Mar 17 18:48:46.168470 sshd[3699]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:46.177972 systemd[1]: sshd@16-134.199.210.138:22-139.178.68.195:43336.service: Deactivated successfully.
Mar 17 18:48:46.194337 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:48:46.211668 systemd-logind[1305]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:48:46.230519 systemd-logind[1305]: Removed session 17.
Mar 17 18:48:46.302304 sshd[3711]: Accepted publickey for core from 139.178.68.195 port 42676 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:46.306037 sshd[3711]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:46.329949 systemd-logind[1305]: New session 18 of user core.
Mar 17 18:48:46.340053 systemd[1]: Started session-18.scope.
Mar 17 18:48:46.714088 sshd[3711]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:46.727526 systemd[1]: sshd@17-134.199.210.138:22-139.178.68.195:42676.service: Deactivated successfully.
Mar 17 18:48:46.735084 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:48:46.737791 systemd-logind[1305]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:48:46.741023 systemd-logind[1305]: Removed session 18.
Mar 17 18:48:50.105244 kubelet[2206]: E0317 18:48:50.105173 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:48:51.721687 systemd[1]: Started sshd@18-134.199.210.138:22-139.178.68.195:42684.service.
Mar 17 18:48:51.790319 sshd[3729]: Accepted publickey for core from 139.178.68.195 port 42684 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:51.793206 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:51.806257 systemd[1]: Started session-19.scope.
Mar 17 18:48:51.806703 systemd-logind[1305]: New session 19 of user core.
Mar 17 18:48:52.028601 sshd[3729]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:52.039562 systemd[1]: sshd@18-134.199.210.138:22-139.178.68.195:42684.service: Deactivated successfully.
Mar 17 18:48:52.043941 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:48:52.044580 systemd-logind[1305]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:48:52.046505 systemd-logind[1305]: Removed session 19.
Mar 17 18:48:57.038351 systemd[1]: Started sshd@19-134.199.210.138:22-139.178.68.195:41976.service.
Mar 17 18:48:57.107841 sshd[3744]: Accepted publickey for core from 139.178.68.195 port 41976 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:48:57.112301 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:48:57.132257 systemd[1]: Started session-20.scope.
Mar 17 18:48:57.132931 systemd-logind[1305]: New session 20 of user core.
Mar 17 18:48:57.370632 sshd[3744]: pam_unix(sshd:session): session closed for user core
Mar 17 18:48:57.378491 systemd[1]: sshd@19-134.199.210.138:22-139.178.68.195:41976.service: Deactivated successfully.
Mar 17 18:48:57.380046 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:48:57.383132 systemd-logind[1305]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:48:57.386381 systemd-logind[1305]: Removed session 20.
Mar 17 18:49:02.377204 systemd[1]: Started sshd@20-134.199.210.138:22-139.178.68.195:41978.service.
Mar 17 18:49:02.481856 sshd[3758]: Accepted publickey for core from 139.178.68.195 port 41978 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:49:02.485819 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:49:02.504404 systemd[1]: Started session-21.scope.
Mar 17 18:49:02.514628 systemd-logind[1305]: New session 21 of user core.
Mar 17 18:49:02.844770 sshd[3758]: pam_unix(sshd:session): session closed for user core
Mar 17 18:49:02.855348 systemd[1]: sshd@20-134.199.210.138:22-139.178.68.195:41978.service: Deactivated successfully.
Mar 17 18:49:02.857121 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:49:02.858398 systemd-logind[1305]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:49:02.863729 systemd-logind[1305]: Removed session 21.
Mar 17 18:49:07.854922 systemd[1]: Started sshd@21-134.199.210.138:22-139.178.68.195:33540.service.
Mar 17 18:49:07.938102 sshd[3771]: Accepted publickey for core from 139.178.68.195 port 33540 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:49:07.948228 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:49:07.956838 systemd[1]: Started session-22.scope.
Mar 17 18:49:07.958331 systemd-logind[1305]: New session 22 of user core.
Mar 17 18:49:08.116892 kubelet[2206]: E0317 18:49:08.105658 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:08.220387 sshd[3771]: pam_unix(sshd:session): session closed for user core
Mar 17 18:49:08.227949 systemd[1]: sshd@21-134.199.210.138:22-139.178.68.195:33540.service: Deactivated successfully.
Mar 17 18:49:08.230514 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:49:08.240456 systemd-logind[1305]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:49:08.251896 systemd-logind[1305]: Removed session 22.
Mar 17 18:49:10.104963 kubelet[2206]: E0317 18:49:10.104898 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:13.226316 systemd[1]: Started sshd@22-134.199.210.138:22-139.178.68.195:33548.service.
Mar 17 18:49:13.292437 sshd[3786]: Accepted publickey for core from 139.178.68.195 port 33548 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:49:13.298070 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:49:13.326893 systemd[1]: Started session-23.scope.
Mar 17 18:49:13.327784 systemd-logind[1305]: New session 23 of user core.
Mar 17 18:49:13.684482 sshd[3786]: pam_unix(sshd:session): session closed for user core
Mar 17 18:49:13.689769 systemd-logind[1305]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:49:13.690396 systemd[1]: sshd@22-134.199.210.138:22-139.178.68.195:33548.service: Deactivated successfully.
Mar 17 18:49:13.693230 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:49:13.694600 systemd-logind[1305]: Removed session 23.
Mar 17 18:49:18.692005 systemd[1]: Started sshd@23-134.199.210.138:22-139.178.68.195:41918.service.
Mar 17 18:49:18.760426 sshd[3799]: Accepted publickey for core from 139.178.68.195 port 41918 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:49:18.766020 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:49:18.777873 systemd[1]: Started session-24.scope.
Mar 17 18:49:18.779275 systemd-logind[1305]: New session 24 of user core.
Mar 17 18:49:19.005436 sshd[3799]: pam_unix(sshd:session): session closed for user core
Mar 17 18:49:19.011247 systemd[1]: Started sshd@24-134.199.210.138:22-139.178.68.195:41922.service.
Mar 17 18:49:19.017368 systemd[1]: sshd@23-134.199.210.138:22-139.178.68.195:41918.service: Deactivated successfully.
Mar 17 18:49:19.020173 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:49:19.020463 systemd-logind[1305]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:49:19.026967 systemd-logind[1305]: Removed session 24.
Mar 17 18:49:19.102514 sshd[3810]: Accepted publickey for core from 139.178.68.195 port 41922 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg
Mar 17 18:49:19.108480 kubelet[2206]: E0317 18:49:19.108299 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:19.112512 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:49:19.128875 systemd[1]: Started session-25.scope.
Mar 17 18:49:19.131476 systemd-logind[1305]: New session 25 of user core.
Mar 17 18:49:20.810766 systemd[1]: run-containerd-runc-k8s.io-6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6-runc.x8ydhm.mount: Deactivated successfully.
Mar 17 18:49:20.835674 env[1312]: time="2025-03-17T18:49:20.835580109Z" level=info msg="StopContainer for \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\" with timeout 30 (s)"
Mar 17 18:49:20.851830 env[1312]: time="2025-03-17T18:49:20.851695821Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:49:20.854790 env[1312]: time="2025-03-17T18:49:20.852156991Z" level=info msg="Stop container \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\" with signal terminated"
Mar 17 18:49:20.865316 env[1312]: time="2025-03-17T18:49:20.865258277Z" level=info msg="StopContainer for \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\" with timeout 2 (s)"
Mar 17 18:49:20.865782 env[1312]: time="2025-03-17T18:49:20.865739862Z" level=info msg="Stop container \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\" with signal terminated"
Mar 17 18:49:20.896649 systemd-networkd[1080]: lxc_health: Link DOWN
Mar 17 18:49:20.896659 systemd-networkd[1080]: lxc_health: Lost carrier
Mar 17 18:49:20.963188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f-rootfs.mount: Deactivated successfully.
Mar 17 18:49:20.977038 env[1312]: time="2025-03-17T18:49:20.976929856Z" level=info msg="shim disconnected" id=9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f
Mar 17 18:49:20.977501 env[1312]: time="2025-03-17T18:49:20.977472080Z" level=warning msg="cleaning up after shim disconnected" id=9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f namespace=k8s.io
Mar 17 18:49:20.977667 env[1312]: time="2025-03-17T18:49:20.977643200Z" level=info msg="cleaning up dead shim"
Mar 17 18:49:21.006248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6-rootfs.mount: Deactivated successfully.
Mar 17 18:49:21.010853 env[1312]: time="2025-03-17T18:49:21.010785285Z" level=info msg="shim disconnected" id=6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6
Mar 17 18:49:21.011650 env[1312]: time="2025-03-17T18:49:21.011589182Z" level=warning msg="cleaning up after shim disconnected" id=6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6 namespace=k8s.io
Mar 17 18:49:21.011881 env[1312]: time="2025-03-17T18:49:21.011856892Z" level=info msg="cleaning up dead shim"
Mar 17 18:49:21.028670 env[1312]: time="2025-03-17T18:49:21.028599461Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3879 runtime=io.containerd.runc.v2\n"
Mar 17 18:49:21.030980 env[1312]: time="2025-03-17T18:49:21.030898034Z" level=info msg="StopContainer for \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\" returns successfully"
Mar 17 18:49:21.032525 env[1312]: time="2025-03-17T18:49:21.032483103Z" level=info msg="StopPodSandbox for \"261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58\""
Mar 17 18:49:21.032810 env[1312]: time="2025-03-17T18:49:21.032767904Z" level=info msg="Container to stop \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:49:21.040517 env[1312]: time="2025-03-17T18:49:21.040455036Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3893 runtime=io.containerd.runc.v2\n"
Mar 17 18:49:21.046173 env[1312]: time="2025-03-17T18:49:21.046094833Z" level=info msg="StopContainer for \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\" returns successfully"
Mar 17 18:49:21.047496 env[1312]: time="2025-03-17T18:49:21.047293832Z" level=info msg="StopPodSandbox for \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\""
Mar 17 18:49:21.047496 env[1312]: time="2025-03-17T18:49:21.047389600Z" level=info msg="Container to stop \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:49:21.047496 env[1312]: time="2025-03-17T18:49:21.047412229Z" level=info msg="Container to stop \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:49:21.047496 env[1312]: time="2025-03-17T18:49:21.047433170Z" level=info msg="Container to stop \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:49:21.047496 env[1312]: time="2025-03-17T18:49:21.047456893Z" level=info msg="Container to stop \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:49:21.048705 env[1312]: time="2025-03-17T18:49:21.047475074Z" level=info msg="Container to stop \"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:49:21.105133 env[1312]: time="2025-03-17T18:49:21.102336362Z" level=info msg="shim disconnected" id=261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58
Mar 17 18:49:21.105133 env[1312]: time="2025-03-17T18:49:21.102952434Z" level=warning msg="cleaning up after shim disconnected" id=261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58 namespace=k8s.io
Mar 17 18:49:21.105133 env[1312]: time="2025-03-17T18:49:21.102972508Z" level=info msg="cleaning up dead shim"
Mar 17 18:49:21.120567 env[1312]: time="2025-03-17T18:49:21.120498148Z" level=info msg="shim disconnected" id=c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee
Mar 17 18:49:21.120567 env[1312]: time="2025-03-17T18:49:21.120564590Z" level=warning msg="cleaning up after shim disconnected" id=c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee namespace=k8s.io
Mar 17 18:49:21.120567 env[1312]: time="2025-03-17T18:49:21.120578224Z" level=info msg="cleaning up dead shim"
Mar 17 18:49:21.133739 env[1312]: time="2025-03-17T18:49:21.133662594Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3949 runtime=io.containerd.runc.v2\n"
Mar 17 18:49:21.134236 env[1312]: time="2025-03-17T18:49:21.134165345Z" level=info msg="TearDown network for sandbox \"261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58\" successfully"
Mar 17 18:49:21.134355 env[1312]: time="2025-03-17T18:49:21.134224359Z" level=info msg="StopPodSandbox for \"261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58\" returns successfully"
Mar 17 18:49:21.153089 env[1312]: time="2025-03-17T18:49:21.148835520Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3958 runtime=io.containerd.runc.v2\n"
Mar 17 18:49:21.153089 env[1312]: time="2025-03-17T18:49:21.149343511Z" level=info msg="TearDown network for sandbox \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" successfully"
Mar 17 18:49:21.153089 env[1312]: time="2025-03-17T18:49:21.149379388Z" level=info msg="StopPodSandbox for \"c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee\" returns successfully"
Mar 17 18:49:21.278809 kubelet[2206]: I0317 18:49:21.278728 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-xtables-lock\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.278809 kubelet[2206]: I0317 18:49:21.278822 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea4895a4-e76a-4f76-a116-9b702eae9ff2-hubble-tls\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.279800 kubelet[2206]: I0317 18:49:21.278845 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-cgroup\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.279800 kubelet[2206]: I0317 18:49:21.278868 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-hostproc\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.279800 kubelet[2206]: I0317 18:49:21.278890 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-host-proc-sys-net\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.279800 kubelet[2206]: I0317 18:49:21.278927 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qjwt\" (UniqueName: \"kubernetes.io/projected/ea4895a4-e76a-4f76-a116-9b702eae9ff2-kube-api-access-2qjwt\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.279800 kubelet[2206]: I0317 18:49:21.278952 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ds45\" (UniqueName: \"kubernetes.io/projected/050d1bc5-ff7b-4af6-92fb-9512ec9e97d1-kube-api-access-4ds45\") pod \"050d1bc5-ff7b-4af6-92fb-9512ec9e97d1\" (UID: \"050d1bc5-ff7b-4af6-92fb-9512ec9e97d1\") "
Mar 17 18:49:21.279800 kubelet[2206]: I0317 18:49:21.278979 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea4895a4-e76a-4f76-a116-9b702eae9ff2-clustermesh-secrets\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.280162 kubelet[2206]: I0317 18:49:21.279002 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-bpf-maps\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.280162 kubelet[2206]: I0317 18:49:21.279032 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/050d1bc5-ff7b-4af6-92fb-9512ec9e97d1-cilium-config-path\") pod \"050d1bc5-ff7b-4af6-92fb-9512ec9e97d1\" (UID: \"050d1bc5-ff7b-4af6-92fb-9512ec9e97d1\") "
Mar 17 18:49:21.280162 kubelet[2206]: I0317 18:49:21.279060 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-config-path\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.280162 kubelet[2206]: I0317 18:49:21.279081 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-host-proc-sys-kernel\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.280162 kubelet[2206]: I0317 18:49:21.279101 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cni-path\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.280162 kubelet[2206]: I0317 18:49:21.279121 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-lib-modules\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.280506 kubelet[2206]: I0317 18:49:21.279143 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-etc-cni-netd\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.280506 kubelet[2206]: I0317 18:49:21.279167 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-run\") pod \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\" (UID: \"ea4895a4-e76a-4f76-a116-9b702eae9ff2\") "
Mar 17 18:49:21.287604 kubelet[2206]: I0317 18:49:21.287513 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:49:21.292246 kubelet[2206]: I0317 18:49:21.292055 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/050d1bc5-ff7b-4af6-92fb-9512ec9e97d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "050d1bc5-ff7b-4af6-92fb-9512ec9e97d1" (UID: "050d1bc5-ff7b-4af6-92fb-9512ec9e97d1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:49:21.295887 kubelet[2206]: I0317 18:49:21.295752 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:49:21.296452 kubelet[2206]: I0317 18:49:21.296326 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:49:21.301476 kubelet[2206]: I0317 18:49:21.301405 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:49:21.301844 kubelet[2206]: I0317 18:49:21.301809 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cni-path" (OuterVolumeSpecName: "cni-path") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:49:21.302364 kubelet[2206]: I0317 18:49:21.302328 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:49:21.302548 kubelet[2206]: I0317 18:49:21.302525 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:49:21.302674 kubelet[2206]: I0317 18:49:21.279916 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:21.303826 kubelet[2206]: I0317 18:49:21.303780 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea4895a4-e76a-4f76-a116-9b702eae9ff2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:49:21.304093 kubelet[2206]: I0317 18:49:21.304062 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:21.304282 kubelet[2206]: I0317 18:49:21.304248 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:21.304429 kubelet[2206]: I0317 18:49:21.304404 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-hostproc" (OuterVolumeSpecName: "hostproc") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:21.306366 kubelet[2206]: I0317 18:49:21.306125 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4895a4-e76a-4f76-a116-9b702eae9ff2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:49:21.311802 kubelet[2206]: I0317 18:49:21.311738 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4895a4-e76a-4f76-a116-9b702eae9ff2-kube-api-access-2qjwt" (OuterVolumeSpecName: "kube-api-access-2qjwt") pod "ea4895a4-e76a-4f76-a116-9b702eae9ff2" (UID: "ea4895a4-e76a-4f76-a116-9b702eae9ff2"). InnerVolumeSpecName "kube-api-access-2qjwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:49:21.312956 kubelet[2206]: I0317 18:49:21.312884 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050d1bc5-ff7b-4af6-92fb-9512ec9e97d1-kube-api-access-4ds45" (OuterVolumeSpecName: "kube-api-access-4ds45") pod "050d1bc5-ff7b-4af6-92fb-9512ec9e97d1" (UID: "050d1bc5-ff7b-4af6-92fb-9512ec9e97d1"). InnerVolumeSpecName "kube-api-access-4ds45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:49:21.378128 kubelet[2206]: E0317 18:49:21.377903 2206 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:49:21.382201 kubelet[2206]: I0317 18:49:21.382141 2206 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-config-path\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.382470 kubelet[2206]: I0317 18:49:21.382255 2206 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-host-proc-sys-kernel\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.382470 kubelet[2206]: I0317 18:49:21.382277 2206 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cni-path\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.382470 kubelet[2206]: I0317 18:49:21.382308 2206 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-lib-modules\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.382470 kubelet[2206]: I0317 18:49:21.382335 2206 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-etc-cni-netd\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.382470 kubelet[2206]: I0317 18:49:21.382351 2206 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-run\") on node \"ci-3510.3.7-0-797a2fde87\" 
DevicePath \"\"" Mar 17 18:49:21.382470 kubelet[2206]: I0317 18:49:21.382365 2206 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-cilium-cgroup\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.382470 kubelet[2206]: I0317 18:49:21.382379 2206 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-xtables-lock\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.382470 kubelet[2206]: I0317 18:49:21.382392 2206 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea4895a4-e76a-4f76-a116-9b702eae9ff2-hubble-tls\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.383062 kubelet[2206]: I0317 18:49:21.382406 2206 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-hostproc\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.383062 kubelet[2206]: I0317 18:49:21.382421 2206 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-host-proc-sys-net\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.383062 kubelet[2206]: I0317 18:49:21.382434 2206 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2qjwt\" (UniqueName: \"kubernetes.io/projected/ea4895a4-e76a-4f76-a116-9b702eae9ff2-kube-api-access-2qjwt\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.383062 kubelet[2206]: I0317 18:49:21.382517 2206 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4ds45\" (UniqueName: \"kubernetes.io/projected/050d1bc5-ff7b-4af6-92fb-9512ec9e97d1-kube-api-access-4ds45\") on node 
\"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.383062 kubelet[2206]: I0317 18:49:21.382549 2206 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea4895a4-e76a-4f76-a116-9b702eae9ff2-clustermesh-secrets\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.383062 kubelet[2206]: I0317 18:49:21.382562 2206 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea4895a4-e76a-4f76-a116-9b702eae9ff2-bpf-maps\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.383062 kubelet[2206]: I0317 18:49:21.382577 2206 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/050d1bc5-ff7b-4af6-92fb-9512ec9e97d1-cilium-config-path\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:21.790649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58-rootfs.mount: Deactivated successfully. Mar 17 18:49:21.790880 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-261dc09ac16ec741d0b68fbe30a1a93e85b724d2f60e8adc377a1c8f4765ea58-shm.mount: Deactivated successfully. Mar 17 18:49:21.791023 systemd[1]: var-lib-kubelet-pods-050d1bc5\x2dff7b\x2d4af6\x2d92fb\x2d9512ec9e97d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4ds45.mount: Deactivated successfully. Mar 17 18:49:21.791135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee-rootfs.mount: Deactivated successfully. Mar 17 18:49:21.791254 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c36786fb102c17126e2610fcd80594b9e16f31070808cefd9a713f9acf53efee-shm.mount: Deactivated successfully. 
Mar 17 18:49:21.791650 systemd[1]: var-lib-kubelet-pods-ea4895a4\x2de76a\x2d4f76\x2da116\x2d9b702eae9ff2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2qjwt.mount: Deactivated successfully. Mar 17 18:49:21.791859 systemd[1]: var-lib-kubelet-pods-ea4895a4\x2de76a\x2d4f76\x2da116\x2d9b702eae9ff2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:49:21.792011 systemd[1]: var-lib-kubelet-pods-ea4895a4\x2de76a\x2d4f76\x2da116\x2d9b702eae9ff2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:49:22.010422 kubelet[2206]: I0317 18:49:22.010370 2206 scope.go:117] "RemoveContainer" containerID="9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f" Mar 17 18:49:22.015471 env[1312]: time="2025-03-17T18:49:22.015250945Z" level=info msg="RemoveContainer for \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\"" Mar 17 18:49:22.022797 env[1312]: time="2025-03-17T18:49:22.022487232Z" level=info msg="RemoveContainer for \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\" returns successfully" Mar 17 18:49:22.024774 kubelet[2206]: I0317 18:49:22.023716 2206 scope.go:117] "RemoveContainer" containerID="9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f" Mar 17 18:49:22.027888 env[1312]: time="2025-03-17T18:49:22.027510822Z" level=error msg="ContainerStatus for \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\": not found" Mar 17 18:49:22.039539 kubelet[2206]: E0317 18:49:22.039460 2206 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\": not found" 
containerID="9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f" Mar 17 18:49:22.039777 kubelet[2206]: I0317 18:49:22.039549 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f"} err="failed to get container status \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ae7a04e33fd68327818d922304a2f9a300738205df723a01f8b0ad75677ff9f\": not found" Mar 17 18:49:22.040527 kubelet[2206]: I0317 18:49:22.040493 2206 scope.go:117] "RemoveContainer" containerID="6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6" Mar 17 18:49:22.059195 env[1312]: time="2025-03-17T18:49:22.057034902Z" level=info msg="RemoveContainer for \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\"" Mar 17 18:49:22.065627 env[1312]: time="2025-03-17T18:49:22.064133550Z" level=info msg="RemoveContainer for \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\" returns successfully" Mar 17 18:49:22.068646 kubelet[2206]: I0317 18:49:22.068596 2206 scope.go:117] "RemoveContainer" containerID="8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9" Mar 17 18:49:22.070901 env[1312]: time="2025-03-17T18:49:22.070855252Z" level=info msg="RemoveContainer for \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\"" Mar 17 18:49:22.087082 env[1312]: time="2025-03-17T18:49:22.087009480Z" level=info msg="RemoveContainer for \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\" returns successfully" Mar 17 18:49:22.088568 kubelet[2206]: I0317 18:49:22.088520 2206 scope.go:117] "RemoveContainer" containerID="c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1" Mar 17 18:49:22.091100 env[1312]: time="2025-03-17T18:49:22.091056119Z" level=info msg="RemoveContainer for 
\"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\"" Mar 17 18:49:22.098988 env[1312]: time="2025-03-17T18:49:22.098919455Z" level=info msg="RemoveContainer for \"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\" returns successfully" Mar 17 18:49:22.099848 kubelet[2206]: I0317 18:49:22.099809 2206 scope.go:117] "RemoveContainer" containerID="bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca" Mar 17 18:49:22.102594 env[1312]: time="2025-03-17T18:49:22.102537978Z" level=info msg="RemoveContainer for \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\"" Mar 17 18:49:22.106338 env[1312]: time="2025-03-17T18:49:22.106276123Z" level=info msg="RemoveContainer for \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\" returns successfully" Mar 17 18:49:22.107015 kubelet[2206]: I0317 18:49:22.106859 2206 scope.go:117] "RemoveContainer" containerID="92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557" Mar 17 18:49:22.109154 env[1312]: time="2025-03-17T18:49:22.109096620Z" level=info msg="RemoveContainer for \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\"" Mar 17 18:49:22.113630 env[1312]: time="2025-03-17T18:49:22.113544396Z" level=info msg="RemoveContainer for \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\" returns successfully" Mar 17 18:49:22.114373 kubelet[2206]: I0317 18:49:22.114153 2206 scope.go:117] "RemoveContainer" containerID="6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6" Mar 17 18:49:22.114672 env[1312]: time="2025-03-17T18:49:22.114561477Z" level=error msg="ContainerStatus for \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\": not found" Mar 17 18:49:22.115095 kubelet[2206]: E0317 18:49:22.114894 2206 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\": not found" containerID="6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6" Mar 17 18:49:22.115095 kubelet[2206]: I0317 18:49:22.114930 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6"} err="failed to get container status \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e8c69f8d1ab3f5174f2d0d7d3c588907c35ecf9cb47e15b4e94c457313f51b6\": not found" Mar 17 18:49:22.115095 kubelet[2206]: I0317 18:49:22.114977 2206 scope.go:117] "RemoveContainer" containerID="8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9" Mar 17 18:49:22.116096 env[1312]: time="2025-03-17T18:49:22.116030324Z" level=error msg="ContainerStatus for \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\": not found" Mar 17 18:49:22.116421 kubelet[2206]: E0317 18:49:22.116377 2206 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\": not found" containerID="8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9" Mar 17 18:49:22.116500 kubelet[2206]: I0317 18:49:22.116422 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9"} err="failed to get container status 
\"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b35bbbc2e5971f1ecea34452a3d4b2e299dbc0ae92c69d06d578a26fa0090c9\": not found" Mar 17 18:49:22.116500 kubelet[2206]: I0317 18:49:22.116457 2206 scope.go:117] "RemoveContainer" containerID="c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1" Mar 17 18:49:22.116850 env[1312]: time="2025-03-17T18:49:22.116782600Z" level=error msg="ContainerStatus for \"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\": not found" Mar 17 18:49:22.117513 kubelet[2206]: E0317 18:49:22.117345 2206 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\": not found" containerID="c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1" Mar 17 18:49:22.117513 kubelet[2206]: I0317 18:49:22.117380 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1"} err="failed to get container status \"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\": rpc error: code = NotFound desc = an error occurred when try to find container \"c165479e20ce2efb1df16bdb2e7d11c8862676e7cf2abb33e597a37b5f331bb1\": not found" Mar 17 18:49:22.117513 kubelet[2206]: I0317 18:49:22.117422 2206 scope.go:117] "RemoveContainer" containerID="bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca" Mar 17 18:49:22.117979 env[1312]: time="2025-03-17T18:49:22.117911518Z" level=error msg="ContainerStatus for \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\": not found" Mar 17 18:49:22.118396 kubelet[2206]: E0317 18:49:22.118199 2206 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\": not found" containerID="bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca" Mar 17 18:49:22.118396 kubelet[2206]: I0317 18:49:22.118275 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca"} err="failed to get container status \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"bacb41b70478051fb98f5d51f08b0468d2734cd3bf3debd9c312c07a25b699ca\": not found" Mar 17 18:49:22.118396 kubelet[2206]: I0317 18:49:22.118314 2206 scope.go:117] "RemoveContainer" containerID="92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557" Mar 17 18:49:22.118769 env[1312]: time="2025-03-17T18:49:22.118706552Z" level=error msg="ContainerStatus for \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\": not found" Mar 17 18:49:22.119118 kubelet[2206]: E0317 18:49:22.119044 2206 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\": not found" containerID="92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557" Mar 17 18:49:22.119118 kubelet[2206]: I0317 
18:49:22.119076 2206 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557"} err="failed to get container status \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\": rpc error: code = NotFound desc = an error occurred when try to find container \"92897fcf5d5791c19d4c00c9b1322531f3d4668d26d845999861a6b6041e1557\": not found" Mar 17 18:49:22.676272 sshd[3810]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:22.684321 systemd[1]: Started sshd@25-134.199.210.138:22-139.178.68.195:41926.service. Mar 17 18:49:22.685700 systemd[1]: sshd@24-134.199.210.138:22-139.178.68.195:41922.service: Deactivated successfully. Mar 17 18:49:22.687457 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:49:22.691548 systemd-logind[1305]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:49:22.694360 systemd-logind[1305]: Removed session 25. Mar 17 18:49:22.762724 sshd[3983]: Accepted publickey for core from 139.178.68.195 port 41926 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:49:22.766063 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:22.775202 systemd-logind[1305]: New session 26 of user core. Mar 17 18:49:22.776379 systemd[1]: Started session-26.scope. 
Mar 17 18:49:23.109658 kubelet[2206]: I0317 18:49:23.109541 2206 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050d1bc5-ff7b-4af6-92fb-9512ec9e97d1" path="/var/lib/kubelet/pods/050d1bc5-ff7b-4af6-92fb-9512ec9e97d1/volumes" Mar 17 18:49:23.110701 kubelet[2206]: I0317 18:49:23.110651 2206 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea4895a4-e76a-4f76-a116-9b702eae9ff2" path="/var/lib/kubelet/pods/ea4895a4-e76a-4f76-a116-9b702eae9ff2/volumes" Mar 17 18:49:23.522394 kubelet[2206]: I0317 18:49:23.522319 2206 setters.go:580] "Node became not ready" node="ci-3510.3.7-0-797a2fde87" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:49:23Z","lastTransitionTime":"2025-03-17T18:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:49:23.562946 sshd[3983]: pam_unix(sshd:session): session closed for user core Mar 17 18:49:23.566681 systemd[1]: Started sshd@26-134.199.210.138:22-139.178.68.195:41936.service. Mar 17 18:49:23.572924 systemd[1]: sshd@25-134.199.210.138:22-139.178.68.195:41926.service: Deactivated successfully. Mar 17 18:49:23.574330 systemd-logind[1305]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:49:23.574514 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:49:23.585477 systemd-logind[1305]: Removed session 26. 
Mar 17 18:49:23.652342 kubelet[2206]: I0317 18:49:23.652273 2206 topology_manager.go:215] "Topology Admit Handler" podUID="b699943e-a81c-452b-a977-492263c40194" podNamespace="kube-system" podName="cilium-w9sl6" Mar 17 18:49:23.652610 kubelet[2206]: E0317 18:49:23.652379 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea4895a4-e76a-4f76-a116-9b702eae9ff2" containerName="mount-cgroup" Mar 17 18:49:23.652610 kubelet[2206]: E0317 18:49:23.652401 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea4895a4-e76a-4f76-a116-9b702eae9ff2" containerName="apply-sysctl-overwrites" Mar 17 18:49:23.652610 kubelet[2206]: E0317 18:49:23.652411 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea4895a4-e76a-4f76-a116-9b702eae9ff2" containerName="mount-bpf-fs" Mar 17 18:49:23.652610 kubelet[2206]: E0317 18:49:23.652420 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="050d1bc5-ff7b-4af6-92fb-9512ec9e97d1" containerName="cilium-operator" Mar 17 18:49:23.652610 kubelet[2206]: E0317 18:49:23.652429 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea4895a4-e76a-4f76-a116-9b702eae9ff2" containerName="cilium-agent" Mar 17 18:49:23.652610 kubelet[2206]: E0317 18:49:23.652440 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea4895a4-e76a-4f76-a116-9b702eae9ff2" containerName="clean-cilium-state" Mar 17 18:49:23.652610 kubelet[2206]: I0317 18:49:23.652486 2206 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4895a4-e76a-4f76-a116-9b702eae9ff2" containerName="cilium-agent" Mar 17 18:49:23.652610 kubelet[2206]: I0317 18:49:23.652496 2206 memory_manager.go:354] "RemoveStaleState removing state" podUID="050d1bc5-ff7b-4af6-92fb-9512ec9e97d1" containerName="cilium-operator" Mar 17 18:49:23.666272 sshd[3996]: Accepted publickey for core from 139.178.68.195 port 41936 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:49:23.669844 
sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:23.684651 systemd-logind[1305]: New session 27 of user core. Mar 17 18:49:23.685468 systemd[1]: Started session-27.scope. Mar 17 18:49:23.801126 kubelet[2206]: I0317 18:49:23.800941 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-host-proc-sys-net\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.801610 kubelet[2206]: I0317 18:49:23.801553 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-host-proc-sys-kernel\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.801892 kubelet[2206]: I0317 18:49:23.801849 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-xtables-lock\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.802072 kubelet[2206]: I0317 18:49:23.802054 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b699943e-a81c-452b-a977-492263c40194-cilium-config-path\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.802257 kubelet[2206]: I0317 18:49:23.802208 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-hostproc\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.802423 kubelet[2206]: I0317 18:49:23.802402 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b699943e-a81c-452b-a977-492263c40194-cilium-ipsec-secrets\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.802552 kubelet[2206]: I0317 18:49:23.802531 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glccj\" (UniqueName: \"kubernetes.io/projected/b699943e-a81c-452b-a977-492263c40194-kube-api-access-glccj\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.802709 kubelet[2206]: I0317 18:49:23.802689 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-etc-cni-netd\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.802874 kubelet[2206]: I0317 18:49:23.802850 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cilium-run\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.803056 kubelet[2206]: I0317 18:49:23.803028 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b699943e-a81c-452b-a977-492263c40194-clustermesh-secrets\") pod \"cilium-w9sl6\" (UID: 
\"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.803185 kubelet[2206]: I0317 18:49:23.803160 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b699943e-a81c-452b-a977-492263c40194-hubble-tls\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.803595 kubelet[2206]: I0317 18:49:23.803549 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-bpf-maps\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.803719 kubelet[2206]: I0317 18:49:23.803699 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-lib-modules\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.803928 kubelet[2206]: I0317 18:49:23.803910 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cilium-cgroup\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.804079 kubelet[2206]: I0317 18:49:23.804058 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cni-path\") pod \"cilium-w9sl6\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " pod="kube-system/cilium-w9sl6" Mar 17 18:49:23.978615 sshd[3996]: pam_unix(sshd:session): session closed for user core 
Mar 17 18:49:24.000552 systemd[1]: Started sshd@27-134.199.210.138:22-139.178.68.195:41946.service. Mar 17 18:49:24.002167 systemd[1]: sshd@26-134.199.210.138:22-139.178.68.195:41936.service: Deactivated successfully. Mar 17 18:49:24.007754 kubelet[2206]: E0317 18:49:24.007685 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:49:24.009857 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 18:49:24.012666 env[1312]: time="2025-03-17T18:49:24.012546181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9sl6,Uid:b699943e-a81c-452b-a977-492263c40194,Namespace:kube-system,Attempt:0,}" Mar 17 18:49:24.017165 systemd-logind[1305]: Session 27 logged out. Waiting for processes to exit. Mar 17 18:49:24.019822 systemd-logind[1305]: Removed session 27. Mar 17 18:49:24.098053 env[1312]: time="2025-03-17T18:49:24.097765453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:49:24.098053 env[1312]: time="2025-03-17T18:49:24.097839786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:49:24.098053 env[1312]: time="2025-03-17T18:49:24.097857830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:49:24.099138 env[1312]: time="2025-03-17T18:49:24.099036809Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065 pid=4024 runtime=io.containerd.runc.v2 Mar 17 18:49:24.135638 sshd[4014]: Accepted publickey for core from 139.178.68.195 port 41946 ssh2: RSA SHA256:8go6IuoP0Sh4aBcdoE6kISxrWrUSPJh2gvf/N4TtaPg Mar 17 18:49:24.134540 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:49:24.146432 systemd-logind[1305]: New session 28 of user core. Mar 17 18:49:24.147301 systemd[1]: Started session-28.scope. Mar 17 18:49:24.223872 env[1312]: time="2025-03-17T18:49:24.223774125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9sl6,Uid:b699943e-a81c-452b-a977-492263c40194,Namespace:kube-system,Attempt:0,} returns sandbox id \"31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065\"" Mar 17 18:49:24.225609 kubelet[2206]: E0317 18:49:24.225254 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Mar 17 18:49:24.234318 env[1312]: time="2025-03-17T18:49:24.234243562Z" level=info msg="CreateContainer within sandbox \"31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:49:24.262333 env[1312]: time="2025-03-17T18:49:24.262248028Z" level=info msg="CreateContainer within sandbox \"31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10242995295e4b51d9ea5bd69f7e181836654f8dd5aa570a8a082e6bd194a28f\"" Mar 17 18:49:24.268458 env[1312]: time="2025-03-17T18:49:24.268051239Z" level=info msg="StartContainer for 
\"10242995295e4b51d9ea5bd69f7e181836654f8dd5aa570a8a082e6bd194a28f\"" Mar 17 18:49:24.372733 env[1312]: time="2025-03-17T18:49:24.372119757Z" level=info msg="StartContainer for \"10242995295e4b51d9ea5bd69f7e181836654f8dd5aa570a8a082e6bd194a28f\" returns successfully" Mar 17 18:49:24.447176 env[1312]: time="2025-03-17T18:49:24.447111014Z" level=info msg="shim disconnected" id=10242995295e4b51d9ea5bd69f7e181836654f8dd5aa570a8a082e6bd194a28f Mar 17 18:49:24.448024 env[1312]: time="2025-03-17T18:49:24.447976522Z" level=warning msg="cleaning up after shim disconnected" id=10242995295e4b51d9ea5bd69f7e181836654f8dd5aa570a8a082e6bd194a28f namespace=k8s.io Mar 17 18:49:24.448205 env[1312]: time="2025-03-17T18:49:24.448182467Z" level=info msg="cleaning up dead shim" Mar 17 18:49:24.489453 env[1312]: time="2025-03-17T18:49:24.488772028Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4114 runtime=io.containerd.runc.v2\n" Mar 17 18:49:25.064951 env[1312]: time="2025-03-17T18:49:25.064900275Z" level=info msg="StopPodSandbox for \"31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065\"" Mar 17 18:49:25.065774 env[1312]: time="2025-03-17T18:49:25.065731522Z" level=info msg="Container to stop \"10242995295e4b51d9ea5bd69f7e181836654f8dd5aa570a8a082e6bd194a28f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:49:25.069552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065-shm.mount: Deactivated successfully. Mar 17 18:49:25.162944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065-rootfs.mount: Deactivated successfully. 
Mar 17 18:49:25.172300 env[1312]: time="2025-03-17T18:49:25.172233371Z" level=info msg="shim disconnected" id=31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065 Mar 17 18:49:25.172661 env[1312]: time="2025-03-17T18:49:25.172628906Z" level=warning msg="cleaning up after shim disconnected" id=31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065 namespace=k8s.io Mar 17 18:49:25.172829 env[1312]: time="2025-03-17T18:49:25.172804967Z" level=info msg="cleaning up dead shim" Mar 17 18:49:25.215832 env[1312]: time="2025-03-17T18:49:25.215766938Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4147 runtime=io.containerd.runc.v2\n" Mar 17 18:49:25.216626 env[1312]: time="2025-03-17T18:49:25.216573524Z" level=info msg="TearDown network for sandbox \"31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065\" successfully" Mar 17 18:49:25.216859 env[1312]: time="2025-03-17T18:49:25.216816152Z" level=info msg="StopPodSandbox for \"31389f310450a786414f84a93fbce0a38e284cb4b4cbe6f5534d5abbae10c065\" returns successfully" Mar 17 18:49:25.353471 kubelet[2206]: I0317 18:49:25.353252 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-bpf-maps\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.353471 kubelet[2206]: I0317 18:49:25.353321 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cni-path\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.353471 kubelet[2206]: I0317 18:49:25.353348 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-hostproc\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.353471 kubelet[2206]: I0317 18:49:25.353373 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-host-proc-sys-kernel\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.353471 kubelet[2206]: I0317 18:49:25.353395 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cilium-run\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.353471 kubelet[2206]: I0317 18:49:25.353419 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b699943e-a81c-452b-a977-492263c40194-hubble-tls\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355014 kubelet[2206]: I0317 18:49:25.353434 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-xtables-lock\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355014 kubelet[2206]: I0317 18:49:25.353449 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-lib-modules\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355014 kubelet[2206]: I0317 18:49:25.353465 2206 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cilium-cgroup\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355014 kubelet[2206]: I0317 18:49:25.353484 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-etc-cni-netd\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355014 kubelet[2206]: I0317 18:49:25.353505 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b699943e-a81c-452b-a977-492263c40194-clustermesh-secrets\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355014 kubelet[2206]: I0317 18:49:25.353523 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b699943e-a81c-452b-a977-492263c40194-cilium-ipsec-secrets\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355547 kubelet[2206]: I0317 18:49:25.353541 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glccj\" (UniqueName: \"kubernetes.io/projected/b699943e-a81c-452b-a977-492263c40194-kube-api-access-glccj\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355547 kubelet[2206]: I0317 18:49:25.353561 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-host-proc-sys-net\") pod 
\"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355547 kubelet[2206]: I0317 18:49:25.353598 2206 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b699943e-a81c-452b-a977-492263c40194-cilium-config-path\") pod \"b699943e-a81c-452b-a977-492263c40194\" (UID: \"b699943e-a81c-452b-a977-492263c40194\") " Mar 17 18:49:25.355547 kubelet[2206]: I0317 18:49:25.354474 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.355547 kubelet[2206]: I0317 18:49:25.354562 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.355895 kubelet[2206]: I0317 18:49:25.355423 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.355895 kubelet[2206]: I0317 18:49:25.355488 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.355895 kubelet[2206]: I0317 18:49:25.355514 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.357768 kubelet[2206]: I0317 18:49:25.354592 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cni-path" (OuterVolumeSpecName: "cni-path") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.361508 kubelet[2206]: I0317 18:49:25.358012 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-hostproc" (OuterVolumeSpecName: "hostproc") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.361508 kubelet[2206]: I0317 18:49:25.358044 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.361508 kubelet[2206]: I0317 18:49:25.358067 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.361508 kubelet[2206]: I0317 18:49:25.361293 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b699943e-a81c-452b-a977-492263c40194-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:49:25.361971 kubelet[2206]: I0317 18:49:25.361652 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:49:25.367710 systemd[1]: var-lib-kubelet-pods-b699943e\x2da81c\x2d452b\x2da977\x2d492263c40194-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:49:25.375412 systemd[1]: var-lib-kubelet-pods-b699943e\x2da81c\x2d452b\x2da977\x2d492263c40194-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:49:25.377410 kubelet[2206]: I0317 18:49:25.377366 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b699943e-a81c-452b-a977-492263c40194-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:49:25.377670 kubelet[2206]: I0317 18:49:25.377570 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b699943e-a81c-452b-a977-492263c40194-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:49:25.380384 kubelet[2206]: I0317 18:49:25.380334 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b699943e-a81c-452b-a977-492263c40194-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:49:25.382643 kubelet[2206]: I0317 18:49:25.382576 2206 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b699943e-a81c-452b-a977-492263c40194-kube-api-access-glccj" (OuterVolumeSpecName: "kube-api-access-glccj") pod "b699943e-a81c-452b-a977-492263c40194" (UID: "b699943e-a81c-452b-a977-492263c40194"). InnerVolumeSpecName "kube-api-access-glccj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:49:25.455022 kubelet[2206]: I0317 18:49:25.454954 2206 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b699943e-a81c-452b-a977-492263c40194-clustermesh-secrets\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455022 kubelet[2206]: I0317 18:49:25.455015 2206 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b699943e-a81c-452b-a977-492263c40194-cilium-ipsec-secrets\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455022 kubelet[2206]: I0317 18:49:25.455033 2206 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-glccj\" (UniqueName: \"kubernetes.io/projected/b699943e-a81c-452b-a977-492263c40194-kube-api-access-glccj\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455601 kubelet[2206]: I0317 18:49:25.455055 2206 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-etc-cni-netd\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455601 kubelet[2206]: I0317 18:49:25.455071 2206 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-host-proc-sys-net\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455601 
kubelet[2206]: I0317 18:49:25.455085 2206 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b699943e-a81c-452b-a977-492263c40194-cilium-config-path\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455601 kubelet[2206]: I0317 18:49:25.455098 2206 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-bpf-maps\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455601 kubelet[2206]: I0317 18:49:25.455113 2206 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cni-path\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455601 kubelet[2206]: I0317 18:49:25.455124 2206 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-hostproc\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455601 kubelet[2206]: I0317 18:49:25.455140 2206 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-host-proc-sys-kernel\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455601 kubelet[2206]: I0317 18:49:25.455157 2206 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cilium-run\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455925 kubelet[2206]: I0317 18:49:25.455171 2206 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b699943e-a81c-452b-a977-492263c40194-hubble-tls\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455925 kubelet[2206]: I0317 
18:49:25.455185 2206 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-xtables-lock\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455925 kubelet[2206]: I0317 18:49:25.455198 2206 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-lib-modules\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.455925 kubelet[2206]: I0317 18:49:25.455470 2206 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b699943e-a81c-452b-a977-492263c40194-cilium-cgroup\") on node \"ci-3510.3.7-0-797a2fde87\" DevicePath \"\"" Mar 17 18:49:25.927067 systemd[1]: var-lib-kubelet-pods-b699943e\x2da81c\x2d452b\x2da977\x2d492263c40194-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dglccj.mount: Deactivated successfully. Mar 17 18:49:25.927615 systemd[1]: var-lib-kubelet-pods-b699943e\x2da81c\x2d452b\x2da977\x2d492263c40194-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:49:26.078393 kubelet[2206]: I0317 18:49:26.077394 2206 scope.go:117] "RemoveContainer" containerID="10242995295e4b51d9ea5bd69f7e181836654f8dd5aa570a8a082e6bd194a28f" Mar 17 18:49:26.094774 env[1312]: time="2025-03-17T18:49:26.092359049Z" level=info msg="RemoveContainer for \"10242995295e4b51d9ea5bd69f7e181836654f8dd5aa570a8a082e6bd194a28f\"" Mar 17 18:49:26.112239 env[1312]: time="2025-03-17T18:49:26.112066591Z" level=info msg="RemoveContainer for \"10242995295e4b51d9ea5bd69f7e181836654f8dd5aa570a8a082e6bd194a28f\" returns successfully" Mar 17 18:49:26.199287 kubelet[2206]: I0317 18:49:26.199208 2206 topology_manager.go:215] "Topology Admit Handler" podUID="fc87b366-4ae3-4b39-889b-f739514d0cc6" podNamespace="kube-system" podName="cilium-lsq79" Mar 17 18:49:26.199633 kubelet[2206]: E0317 18:49:26.199611 2206 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b699943e-a81c-452b-a977-492263c40194" containerName="mount-cgroup" Mar 17 18:49:26.199784 kubelet[2206]: I0317 18:49:26.199766 2206 memory_manager.go:354] "RemoveStaleState removing state" podUID="b699943e-a81c-452b-a977-492263c40194" containerName="mount-cgroup" Mar 17 18:49:26.368088 kubelet[2206]: I0317 18:49:26.368026 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-cilium-run\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79" Mar 17 18:49:26.368880 kubelet[2206]: I0317 18:49:26.368839 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-bpf-maps\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79" Mar 17 18:49:26.369086 kubelet[2206]: I0317 18:49:26.369060 2206 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-host-proc-sys-net\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79" Mar 17 18:49:26.369306 kubelet[2206]: I0317 18:49:26.369281 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc87b366-4ae3-4b39-889b-f739514d0cc6-hubble-tls\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79" Mar 17 18:49:26.369518 kubelet[2206]: I0317 18:49:26.369494 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc87b366-4ae3-4b39-889b-f739514d0cc6-clustermesh-secrets\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79" Mar 17 18:49:26.369710 kubelet[2206]: I0317 18:49:26.369681 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc87b366-4ae3-4b39-889b-f739514d0cc6-cilium-config-path\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79" Mar 17 18:49:26.369868 kubelet[2206]: I0317 18:49:26.369844 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-host-proc-sys-kernel\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79" Mar 17 18:49:26.370034 kubelet[2206]: I0317 18:49:26.370011 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-cni-path\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79"
Mar 17 18:49:26.370594 kubelet[2206]: I0317 18:49:26.370552 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-etc-cni-netd\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79"
Mar 17 18:49:26.370756 kubelet[2206]: I0317 18:49:26.370739 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-cilium-cgroup\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79"
Mar 17 18:49:26.370868 kubelet[2206]: I0317 18:49:26.370854 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-lib-modules\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79"
Mar 17 18:49:26.370979 kubelet[2206]: I0317 18:49:26.370957 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-xtables-lock\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79"
Mar 17 18:49:26.371134 kubelet[2206]: I0317 18:49:26.371111 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc87b366-4ae3-4b39-889b-f739514d0cc6-cilium-ipsec-secrets\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79"
Mar 17 18:49:26.371551 kubelet[2206]: I0317 18:49:26.371532 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc87b366-4ae3-4b39-889b-f739514d0cc6-hostproc\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79"
Mar 17 18:49:26.371729 kubelet[2206]: I0317 18:49:26.371700 2206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7qw\" (UniqueName: \"kubernetes.io/projected/fc87b366-4ae3-4b39-889b-f739514d0cc6-kube-api-access-ms7qw\") pod \"cilium-lsq79\" (UID: \"fc87b366-4ae3-4b39-889b-f739514d0cc6\") " pod="kube-system/cilium-lsq79"
Mar 17 18:49:26.381950 kubelet[2206]: E0317 18:49:26.379983 2206 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:49:26.804410 kubelet[2206]: E0317 18:49:26.804351 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:26.805770 env[1312]: time="2025-03-17T18:49:26.805187687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsq79,Uid:fc87b366-4ae3-4b39-889b-f739514d0cc6,Namespace:kube-system,Attempt:0,}"
Mar 17 18:49:26.830398 env[1312]: time="2025-03-17T18:49:26.830300179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:49:26.830657 env[1312]: time="2025-03-17T18:49:26.830624380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:49:26.830801 env[1312]: time="2025-03-17T18:49:26.830765346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:49:26.832053 env[1312]: time="2025-03-17T18:49:26.831960086Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739 pid=4175 runtime=io.containerd.runc.v2
Mar 17 18:49:26.894202 env[1312]: time="2025-03-17T18:49:26.894121532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsq79,Uid:fc87b366-4ae3-4b39-889b-f739514d0cc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\""
Mar 17 18:49:26.895875 kubelet[2206]: E0317 18:49:26.895742 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:26.906115 env[1312]: time="2025-03-17T18:49:26.906029663Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:49:26.921303 env[1312]: time="2025-03-17T18:49:26.921228242Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bf8e6d64bf783bfe5a7d035dcd439d742f7ad744ad15144b240a49d9897dcc05\""
Mar 17 18:49:26.934991 env[1312]: time="2025-03-17T18:49:26.933608267Z" level=info msg="StartContainer for \"bf8e6d64bf783bfe5a7d035dcd439d742f7ad744ad15144b240a49d9897dcc05\""
Mar 17 18:49:27.035258 env[1312]: time="2025-03-17T18:49:27.031545394Z" level=info msg="StartContainer for \"bf8e6d64bf783bfe5a7d035dcd439d742f7ad744ad15144b240a49d9897dcc05\" returns successfully"
Mar 17 18:49:27.077143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf8e6d64bf783bfe5a7d035dcd439d742f7ad744ad15144b240a49d9897dcc05-rootfs.mount: Deactivated successfully.
Mar 17 18:49:27.085793 kubelet[2206]: E0317 18:49:27.085250 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:27.086633 env[1312]: time="2025-03-17T18:49:27.086584192Z" level=info msg="shim disconnected" id=bf8e6d64bf783bfe5a7d035dcd439d742f7ad744ad15144b240a49d9897dcc05
Mar 17 18:49:27.088372 env[1312]: time="2025-03-17T18:49:27.088308865Z" level=warning msg="cleaning up after shim disconnected" id=bf8e6d64bf783bfe5a7d035dcd439d742f7ad744ad15144b240a49d9897dcc05 namespace=k8s.io
Mar 17 18:49:27.088702 env[1312]: time="2025-03-17T18:49:27.088664928Z" level=info msg="cleaning up dead shim"
Mar 17 18:49:27.107535 kubelet[2206]: I0317 18:49:27.107482 2206 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b699943e-a81c-452b-a977-492263c40194" path="/var/lib/kubelet/pods/b699943e-a81c-452b-a977-492263c40194/volumes"
Mar 17 18:49:27.110390 env[1312]: time="2025-03-17T18:49:27.110338155Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4260 runtime=io.containerd.runc.v2\n"
Mar 17 18:49:28.115766 kubelet[2206]: E0317 18:49:28.115720 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:28.121704 env[1312]: time="2025-03-17T18:49:28.120159293Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:49:28.172754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575404343.mount: Deactivated successfully.
Mar 17 18:49:28.200318 env[1312]: time="2025-03-17T18:49:28.200019789Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f9f33d3c085a24594a2a7437ba83bb615380483c88999ea4177e5fabee157f34\""
Mar 17 18:49:28.201190 env[1312]: time="2025-03-17T18:49:28.201142647Z" level=info msg="StartContainer for \"f9f33d3c085a24594a2a7437ba83bb615380483c88999ea4177e5fabee157f34\""
Mar 17 18:49:28.320901 env[1312]: time="2025-03-17T18:49:28.320585747Z" level=info msg="StartContainer for \"f9f33d3c085a24594a2a7437ba83bb615380483c88999ea4177e5fabee157f34\" returns successfully"
Mar 17 18:49:28.377492 env[1312]: time="2025-03-17T18:49:28.377059679Z" level=info msg="shim disconnected" id=f9f33d3c085a24594a2a7437ba83bb615380483c88999ea4177e5fabee157f34
Mar 17 18:49:28.377931 env[1312]: time="2025-03-17T18:49:28.377849475Z" level=warning msg="cleaning up after shim disconnected" id=f9f33d3c085a24594a2a7437ba83bb615380483c88999ea4177e5fabee157f34 namespace=k8s.io
Mar 17 18:49:28.378104 env[1312]: time="2025-03-17T18:49:28.378077600Z" level=info msg="cleaning up dead shim"
Mar 17 18:49:28.396367 env[1312]: time="2025-03-17T18:49:28.396294259Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4323 runtime=io.containerd.runc.v2\n"
Mar 17 18:49:29.121697 kubelet[2206]: E0317 18:49:29.120914 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:29.127004 env[1312]: time="2025-03-17T18:49:29.126945053Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:49:29.156912 env[1312]: time="2025-03-17T18:49:29.156845055Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a331bf5785eeea67ddf5f2d017523c14d4c3e79dc171e62cc63c1657bb78a045\""
Mar 17 18:49:29.158515 env[1312]: time="2025-03-17T18:49:29.158477425Z" level=info msg="StartContainer for \"a331bf5785eeea67ddf5f2d017523c14d4c3e79dc171e62cc63c1657bb78a045\""
Mar 17 18:49:29.160751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9f33d3c085a24594a2a7437ba83bb615380483c88999ea4177e5fabee157f34-rootfs.mount: Deactivated successfully.
Mar 17 18:49:29.238015 systemd[1]: run-containerd-runc-k8s.io-a331bf5785eeea67ddf5f2d017523c14d4c3e79dc171e62cc63c1657bb78a045-runc.f1CF7w.mount: Deactivated successfully.
Mar 17 18:49:29.413691 env[1312]: time="2025-03-17T18:49:29.413489323Z" level=info msg="StartContainer for \"a331bf5785eeea67ddf5f2d017523c14d4c3e79dc171e62cc63c1657bb78a045\" returns successfully"
Mar 17 18:49:29.446679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a331bf5785eeea67ddf5f2d017523c14d4c3e79dc171e62cc63c1657bb78a045-rootfs.mount: Deactivated successfully.
Mar 17 18:49:29.455635 env[1312]: time="2025-03-17T18:49:29.455512663Z" level=info msg="shim disconnected" id=a331bf5785eeea67ddf5f2d017523c14d4c3e79dc171e62cc63c1657bb78a045
Mar 17 18:49:29.455635 env[1312]: time="2025-03-17T18:49:29.455601556Z" level=warning msg="cleaning up after shim disconnected" id=a331bf5785eeea67ddf5f2d017523c14d4c3e79dc171e62cc63c1657bb78a045 namespace=k8s.io
Mar 17 18:49:29.455635 env[1312]: time="2025-03-17T18:49:29.455617413Z" level=info msg="cleaning up dead shim"
Mar 17 18:49:29.477665 env[1312]: time="2025-03-17T18:49:29.477594616Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4383 runtime=io.containerd.runc.v2\n"
Mar 17 18:49:30.140326 kubelet[2206]: E0317 18:49:30.137823 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:30.150144 env[1312]: time="2025-03-17T18:49:30.144602079Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:49:30.206205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184895152.mount: Deactivated successfully.
Mar 17 18:49:30.246425 env[1312]: time="2025-03-17T18:49:30.246214587Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"50641627fa83272453dfd311133b0cad905340bcbc807064adba974e59239ebe\""
Mar 17 18:49:30.247869 env[1312]: time="2025-03-17T18:49:30.247816557Z" level=info msg="StartContainer for \"50641627fa83272453dfd311133b0cad905340bcbc807064adba974e59239ebe\""
Mar 17 18:49:30.422317 env[1312]: time="2025-03-17T18:49:30.421901734Z" level=info msg="StartContainer for \"50641627fa83272453dfd311133b0cad905340bcbc807064adba974e59239ebe\" returns successfully"
Mar 17 18:49:30.486441 env[1312]: time="2025-03-17T18:49:30.486355234Z" level=info msg="shim disconnected" id=50641627fa83272453dfd311133b0cad905340bcbc807064adba974e59239ebe
Mar 17 18:49:30.487026 env[1312]: time="2025-03-17T18:49:30.486984022Z" level=warning msg="cleaning up after shim disconnected" id=50641627fa83272453dfd311133b0cad905340bcbc807064adba974e59239ebe namespace=k8s.io
Mar 17 18:49:30.487289 env[1312]: time="2025-03-17T18:49:30.487260358Z" level=info msg="cleaning up dead shim"
Mar 17 18:49:30.509560 env[1312]: time="2025-03-17T18:49:30.509463264Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:49:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4440 runtime=io.containerd.runc.v2\n"
Mar 17 18:49:31.151575 kubelet[2206]: E0317 18:49:31.151523 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:31.193732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50641627fa83272453dfd311133b0cad905340bcbc807064adba974e59239ebe-rootfs.mount: Deactivated successfully.
Mar 17 18:49:31.214706 env[1312]: time="2025-03-17T18:49:31.205079359Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:49:31.249261 env[1312]: time="2025-03-17T18:49:31.237956189Z" level=info msg="CreateContainer within sandbox \"aec2b816269bd251339e9f2fd220eaf5c3dcd812b14b1e256b5b6cef5c3b6739\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e12f0dc98f502da7f9af0821dd9491e2e99d7a0564033fc466b98a57e199a1f\""
Mar 17 18:49:31.249261 env[1312]: time="2025-03-17T18:49:31.242405088Z" level=info msg="StartContainer for \"0e12f0dc98f502da7f9af0821dd9491e2e99d7a0564033fc466b98a57e199a1f\""
Mar 17 18:49:31.388409 kubelet[2206]: E0317 18:49:31.388337 2206 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:49:31.455882 env[1312]: time="2025-03-17T18:49:31.454915839Z" level=info msg="StartContainer for \"0e12f0dc98f502da7f9af0821dd9491e2e99d7a0564033fc466b98a57e199a1f\" returns successfully"
Mar 17 18:49:32.161202 kubelet[2206]: E0317 18:49:32.161161 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:32.894248 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:49:32.917785 systemd[1]: run-containerd-runc-k8s.io-0e12f0dc98f502da7f9af0821dd9491e2e99d7a0564033fc466b98a57e199a1f-runc.RFZn41.mount: Deactivated successfully.
Mar 17 18:49:33.164178 kubelet[2206]: E0317 18:49:33.164018 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:35.381727 systemd[1]: run-containerd-runc-k8s.io-0e12f0dc98f502da7f9af0821dd9491e2e99d7a0564033fc466b98a57e199a1f-runc.Sq5YPR.mount: Deactivated successfully.
Mar 17 18:49:38.034326 systemd-networkd[1080]: lxc_health: Link UP
Mar 17 18:49:38.041743 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:49:38.039788 systemd-networkd[1080]: lxc_health: Gained carrier
Mar 17 18:49:38.808989 kubelet[2206]: E0317 18:49:38.808935 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:38.858293 kubelet[2206]: I0317 18:49:38.858192 2206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lsq79" podStartSLOduration=12.858167068 podStartE2EDuration="12.858167068s" podCreationTimestamp="2025-03-17 18:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:49:32.288215361 +0000 UTC m=+141.595806519" watchObservedRunningTime="2025-03-17 18:49:38.858167068 +0000 UTC m=+148.165758223"
Mar 17 18:49:39.105025 kubelet[2206]: E0317 18:49:39.104861 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:39.197905 kubelet[2206]: E0317 18:49:39.197854 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:39.549353 systemd-networkd[1080]: lxc_health: Gained IPv6LL
Mar 17 18:49:40.104439 kubelet[2206]: E0317 18:49:40.104300 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:40.200713 kubelet[2206]: E0317 18:49:40.200653 2206 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Mar 17 18:49:44.542072 sshd[4014]: pam_unix(sshd:session): session closed for user core
Mar 17 18:49:44.548197 systemd[1]: sshd@27-134.199.210.138:22-139.178.68.195:41946.service: Deactivated successfully.
Mar 17 18:49:44.549572 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 18:49:44.550278 systemd-logind[1305]: Session 28 logged out. Waiting for processes to exit.
Mar 17 18:49:44.551505 systemd-logind[1305]: Removed session 28.