Mar 17 20:41:29.942975 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025 Mar 17 20:41:29.943054 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 20:41:29.943104 kernel: BIOS-provided physical RAM map: Mar 17 20:41:29.943127 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 17 20:41:29.943142 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 17 20:41:29.943184 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 17 20:41:29.943202 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Mar 17 20:41:29.943242 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Mar 17 20:41:29.943258 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 17 20:41:29.943273 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 17 20:41:29.943337 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Mar 17 20:41:29.943353 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 17 20:41:29.943397 kernel: NX (Execute Disable) protection: active Mar 17 20:41:29.943414 kernel: SMBIOS 3.0.0 present. Mar 17 20:41:29.943457 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Mar 17 20:41:29.943474 kernel: Hypervisor detected: KVM Mar 17 20:41:29.943490 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 17 20:41:29.943506 kernel: kvm-clock: cpu 0, msr 4f19a001, primary cpu clock Mar 17 20:41:29.943525 kernel: kvm-clock: using sched offset of 4068376475 cycles Mar 17 20:41:29.943543 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 17 20:41:29.943559 kernel: tsc: Detected 1996.249 MHz processor Mar 17 20:41:29.943602 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 20:41:29.943657 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 20:41:29.943676 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Mar 17 20:41:29.943692 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 20:41:29.943734 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Mar 17 20:41:29.943752 kernel: ACPI: Early table checksum verification disabled Mar 17 20:41:29.943772 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Mar 17 20:41:29.943789 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:41:29.943806 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:41:29.943849 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:41:29.943866 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Mar 17 20:41:29.943883 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:41:29.943899 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 20:41:29.943916 kernel: ACPI: Reserving FACP table memory at [mem 
0xbffe1a49-0xbffe1abc] Mar 17 20:41:29.943936 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Mar 17 20:41:29.943952 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Mar 17 20:41:29.943996 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Mar 17 20:41:29.944012 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Mar 17 20:41:29.944029 kernel: No NUMA configuration found Mar 17 20:41:29.944078 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Mar 17 20:41:29.944096 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Mar 17 20:41:29.944115 kernel: Zone ranges: Mar 17 20:41:29.944158 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 20:41:29.944176 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 17 20:41:29.944219 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Mar 17 20:41:29.944237 kernel: Movable zone start for each node Mar 17 20:41:29.946681 kernel: Early memory node ranges Mar 17 20:41:29.946709 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 17 20:41:29.946726 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Mar 17 20:41:29.946749 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Mar 17 20:41:29.946766 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Mar 17 20:41:29.946784 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 20:41:29.946801 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 17 20:41:29.946818 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Mar 17 20:41:29.946835 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 17 20:41:29.946852 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 17 20:41:29.946869 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 17 20:41:29.946886 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 17 20:41:29.946906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 17 20:41:29.946924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 20:41:29.946940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 17 20:41:29.946958 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 17 20:41:29.946974 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 20:41:29.946991 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 17 20:41:29.947008 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Mar 17 20:41:29.947025 kernel: Booting paravirtualized kernel on KVM Mar 17 20:41:29.947042 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 20:41:29.947062 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Mar 17 20:41:29.947079 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Mar 17 20:41:29.947096 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Mar 17 20:41:29.947112 kernel: pcpu-alloc: [0] 0 1 Mar 17 20:41:29.947129 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0 Mar 17 20:41:29.947146 kernel: kvm-guest: PV spinlocks disabled, no host support Mar 17 20:41:29.947163 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1031901 Mar 17 20:41:29.947180 kernel: Policy zone: Normal Mar 17 20:41:29.947203 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 20:41:29.947226 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 20:41:29.947243 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 20:41:29.947261 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 20:41:29.947321 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 20:41:29.947343 kernel: Memory: 3968276K/4193772K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 225236K reserved, 0K cma-reserved) Mar 17 20:41:29.947360 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 20:41:29.947377 kernel: ftrace: allocating 34580 entries in 136 pages Mar 17 20:41:29.947394 kernel: ftrace: allocated 136 pages with 2 groups Mar 17 20:41:29.947416 kernel: rcu: Hierarchical RCU implementation. Mar 17 20:41:29.947434 kernel: rcu: RCU event tracing is enabled. Mar 17 20:41:29.947452 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 20:41:29.947470 kernel: Rude variant of Tasks RCU enabled. Mar 17 20:41:29.947487 kernel: Tracing variant of Tasks RCU enabled. Mar 17 20:41:29.947504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 17 20:41:29.947521 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 20:41:29.947538 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 17 20:41:29.947554 kernel: Console: colour VGA+ 80x25 Mar 17 20:41:29.947574 kernel: printk: console [tty0] enabled Mar 17 20:41:29.947591 kernel: printk: console [ttyS0] enabled Mar 17 20:41:29.947608 kernel: ACPI: Core revision 20210730 Mar 17 20:41:29.947643 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 20:41:29.947661 kernel: x2apic enabled Mar 17 20:41:29.947678 kernel: Switched APIC routing to physical x2apic. Mar 17 20:41:29.947695 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 17 20:41:29.947712 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 17 20:41:29.947729 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Mar 17 20:41:29.947749 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Mar 17 20:41:29.947766 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Mar 17 20:41:29.947784 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 20:41:29.947800 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 20:41:29.947817 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 20:41:29.947834 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 20:41:29.947851 kernel: Speculative Store Bypass: Vulnerable Mar 17 20:41:29.947868 kernel: x86/fpu: x87 FPU will use FXSAVE Mar 17 20:41:29.947884 kernel: Freeing SMP alternatives memory: 32K Mar 17 20:41:29.947904 kernel: pid_max: default: 32768 minimum: 301 Mar 17 20:41:29.947921 kernel: LSM: Security Framework initializing Mar 17 20:41:29.947938 kernel: SELinux: Initializing. Mar 17 20:41:29.947955 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 20:41:29.947972 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 20:41:29.947990 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Mar 17 20:41:29.948017 kernel: Performance Events: AMD PMU driver. Mar 17 20:41:29.948037 kernel: ... version: 0 Mar 17 20:41:29.948054 kernel: ... bit width: 48 Mar 17 20:41:29.948072 kernel: ... generic registers: 4 Mar 17 20:41:29.948089 kernel: ... value mask: 0000ffffffffffff Mar 17 20:41:29.948107 kernel: ... max period: 00007fffffffffff Mar 17 20:41:29.948127 kernel: ... fixed-purpose events: 0 Mar 17 20:41:29.948144 kernel: ... event mask: 000000000000000f Mar 17 20:41:29.948162 kernel: signal: max sigframe size: 1440 Mar 17 20:41:29.948179 kernel: rcu: Hierarchical SRCU implementation. Mar 17 20:41:29.948197 kernel: smp: Bringing up secondary CPUs ... Mar 17 20:41:29.948217 kernel: x86: Booting SMP configuration: Mar 17 20:41:29.948234 kernel: .... 
node #0, CPUs: #1 Mar 17 20:41:29.948251 kernel: kvm-clock: cpu 1, msr 4f19a041, secondary cpu clock Mar 17 20:41:29.948269 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0 Mar 17 20:41:29.948309 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 20:41:29.948327 kernel: smpboot: Max logical packages: 2 Mar 17 20:41:29.948345 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Mar 17 20:41:29.948362 kernel: devtmpfs: initialized Mar 17 20:41:29.948380 kernel: x86/mm: Memory block size: 128MB Mar 17 20:41:29.948401 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 20:41:29.948420 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 20:41:29.948438 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 20:41:29.948456 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 20:41:29.948473 kernel: audit: initializing netlink subsys (disabled) Mar 17 20:41:29.948491 kernel: audit: type=2000 audit(1742244090.154:1): state=initialized audit_enabled=0 res=1 Mar 17 20:41:29.948508 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 20:41:29.948526 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 20:41:29.948543 kernel: cpuidle: using governor menu Mar 17 20:41:29.948564 kernel: ACPI: bus type PCI registered Mar 17 20:41:29.948581 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 20:41:29.948599 kernel: dca service started, version 1.12.1 Mar 17 20:41:29.948617 kernel: PCI: Using configuration type 1 for base access Mar 17 20:41:29.948634 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 17 20:41:29.948652 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 20:41:29.948670 kernel: ACPI: Added _OSI(Module Device) Mar 17 20:41:29.948688 kernel: ACPI: Added _OSI(Processor Device) Mar 17 20:41:29.948705 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 20:41:29.948725 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 20:41:29.948743 kernel: ACPI: Added _OSI(Linux-Dell-Video) Mar 17 20:41:29.948760 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Mar 17 20:41:29.948778 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Mar 17 20:41:29.948796 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 20:41:29.948813 kernel: ACPI: Interpreter enabled Mar 17 20:41:29.948831 kernel: ACPI: PM: (supports S0 S3 S5) Mar 17 20:41:29.948848 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 20:41:29.948866 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 20:41:29.948887 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Mar 17 20:41:29.948904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 20:41:29.949185 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 17 20:41:29.949409 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Mar 17 20:41:29.949439 kernel: acpiphp: Slot [3] registered Mar 17 20:41:29.949457 kernel: acpiphp: Slot [4] registered Mar 17 20:41:29.949474 kernel: acpiphp: Slot [5] registered Mar 17 20:41:29.949492 kernel: acpiphp: Slot [6] registered Mar 17 20:41:29.949516 kernel: acpiphp: Slot [7] registered Mar 17 20:41:29.949533 kernel: acpiphp: Slot [8] registered Mar 17 20:41:29.949550 kernel: acpiphp: Slot [9] registered Mar 17 20:41:29.949567 kernel: acpiphp: Slot [10] registered Mar 17 20:41:29.949585 kernel: acpiphp: Slot [11] registered Mar 17 20:41:29.949602 kernel: acpiphp: Slot [12] registered Mar 17 20:41:29.949619 kernel: acpiphp: Slot [13] registered Mar 17 20:41:29.949637 kernel: acpiphp: Slot [14] registered Mar 17 20:41:29.949654 kernel: acpiphp: Slot [15] registered Mar 17 20:41:29.949674 kernel: acpiphp: Slot [16] registered Mar 17 20:41:29.949692 kernel: acpiphp: Slot [17] registered Mar 17 20:41:29.949709 kernel: acpiphp: Slot [18] registered Mar 17 20:41:29.949727 kernel: acpiphp: Slot [19] registered Mar 17 20:41:29.949744 kernel: acpiphp: Slot [20] registered Mar 17 20:41:29.949761 kernel: acpiphp: Slot [21] registered Mar 17 20:41:29.949778 kernel: acpiphp: Slot [22] registered Mar 17 20:41:29.949796 kernel: acpiphp: Slot [23] registered Mar 17 20:41:29.949813 kernel: acpiphp: Slot [24] registered Mar 17 20:41:29.949830 kernel: acpiphp: Slot [25] registered Mar 17 20:41:29.949850 kernel: acpiphp: Slot [26] registered Mar 17 20:41:29.949868 kernel: acpiphp: Slot [27] registered Mar 17 20:41:29.949885 kernel: acpiphp: Slot [28] registered Mar 17 20:41:29.949902 kernel: acpiphp: Slot [29] registered Mar 17 20:41:29.949920 kernel: acpiphp: Slot [30] registered Mar 17 20:41:29.949937 kernel: acpiphp: Slot [31] registered Mar 17 20:41:29.949954 kernel: PCI host bridge to bus 0000:00 Mar 17 20:41:29.950135 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 20:41:29.950340 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 17 20:41:29.950507 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 17 20:41:29.950665 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 17 20:41:29.950821 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Mar 17 20:41:29.950978 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 20:41:29.951182 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Mar 17 20:41:29.955445 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Mar 17 20:41:29.955674 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Mar 17 20:41:29.955846 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Mar 17 20:41:29.956043 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Mar 17 20:41:29.956206 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Mar 17 20:41:29.956402 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Mar 17 20:41:29.956564 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Mar 17 20:41:29.956745 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Mar 17 20:41:29.956904 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Mar 17 20:41:29.956987 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Mar 17 20:41:29.957082 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Mar 17 20:41:29.957165 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Mar 17 
20:41:29.957251 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Mar 17 20:41:29.959536 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Mar 17 20:41:29.959643 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Mar 17 20:41:29.959729 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 20:41:29.959824 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Mar 17 20:41:29.959907 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Mar 17 20:41:29.959988 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Mar 17 20:41:29.960070 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Mar 17 20:41:29.960153 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Mar 17 20:41:29.960248 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Mar 17 20:41:29.960349 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Mar 17 20:41:29.960434 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Mar 17 20:41:29.960516 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Mar 17 20:41:29.960607 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Mar 17 20:41:29.960689 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Mar 17 20:41:29.960771 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Mar 17 20:41:29.960865 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 20:41:29.960947 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Mar 17 20:41:29.961027 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Mar 17 20:41:29.961109 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Mar 17 20:41:29.961121 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 17 20:41:29.961129 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 17 20:41:29.961138 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 20:41:29.961148 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 17 20:41:29.961157 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Mar 17 20:41:29.961165 kernel: iommu: Default domain type: Translated Mar 17 20:41:29.961174 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 20:41:29.961254 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Mar 17 20:41:29.963425 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 20:41:29.963513 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Mar 17 20:41:29.963526 kernel: vgaarb: loaded Mar 17 20:41:29.963535 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 20:41:29.963547 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 17 20:41:29.963556 kernel: PTP clock support registered Mar 17 20:41:29.963564 kernel: PCI: Using ACPI for IRQ routing Mar 17 20:41:29.963572 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 20:41:29.963581 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 17 20:41:29.963589 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Mar 17 20:41:29.963597 kernel: clocksource: Switched to clocksource kvm-clock Mar 17 20:41:29.963605 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 20:41:29.963613 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 20:41:29.963635 kernel: pnp: PnP ACPI init Mar 17 20:41:29.963722 kernel: pnp 00:03: [dma 2] Mar 17 20:41:29.963735 kernel: pnp: PnP ACPI: found 5 devices Mar 17 20:41:29.963743 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 20:41:29.963752 kernel: NET: Registered PF_INET protocol family Mar 17 20:41:29.963760 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 20:41:29.963768 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 20:41:29.963776 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 20:41:29.963787 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 20:41:29.963796 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Mar 17 20:41:29.963804 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 20:41:29.963812 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 20:41:29.963820 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 20:41:29.963828 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 20:41:29.963836 kernel: NET: Registered PF_XDP protocol family Mar 17 20:41:29.963909 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 17 20:41:29.963980 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 17 20:41:29.964054 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 17 20:41:29.964125 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Mar 17 20:41:29.964195 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Mar 17 20:41:29.964778 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Mar 17 20:41:29.964880 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 17 20:41:29.964962 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Mar 17 20:41:29.964974 kernel: PCI: CLS 0 bytes, default 64 Mar 17 20:41:29.964983 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 17 20:41:29.964995 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Mar 17 20:41:29.965003 kernel: Initialise system trusted keyrings Mar 17 20:41:29.965012 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 20:41:29.965020 kernel: Key type asymmetric registered Mar 17 20:41:29.965028 kernel: Asymmetric key parser 'x509' registered Mar 17 20:41:29.965036 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 17 20:41:29.965045 kernel: io scheduler mq-deadline registered Mar 17 20:41:29.965053 kernel: io scheduler kyber registered Mar 17 20:41:29.965061 kernel: io scheduler bfq registered Mar 17 20:41:29.965070 kernel: ioatdma: Intel(R) QuickData Technology Driver 
5.00 Mar 17 20:41:29.965079 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Mar 17 20:41:29.965087 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Mar 17 20:41:29.965096 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Mar 17 20:41:29.965104 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Mar 17 20:41:29.965112 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 20:41:29.965120 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 20:41:29.965128 kernel: random: crng init done Mar 17 20:41:29.965137 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 20:41:29.965146 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 20:41:29.965154 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 20:41:29.965163 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 20:41:29.965244 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 17 20:41:29.965337 kernel: rtc_cmos 00:04: registered as rtc0 Mar 17 20:41:29.965412 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T20:41:29 UTC (1742244089) Mar 17 20:41:29.965484 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Mar 17 20:41:29.965495 kernel: NET: Registered PF_INET6 protocol family Mar 17 20:41:29.965507 kernel: Segment Routing with IPv6 Mar 17 20:41:29.965515 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 20:41:29.965523 kernel: NET: Registered PF_PACKET protocol family Mar 17 20:41:29.965531 kernel: Key type dns_resolver registered Mar 17 20:41:29.965539 kernel: IPI shorthand broadcast: enabled Mar 17 20:41:29.965548 kernel: sched_clock: Marking stable (858696141, 158797683)->(1085023702, -67529878) Mar 17 20:41:29.965556 kernel: registered taskstats version 1 Mar 17 20:41:29.965564 kernel: Loading compiled-in X.509 certificates Mar 17 20:41:29.965573 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220' Mar 17 20:41:29.965582 kernel: Key type .fscrypt registered Mar 17 20:41:29.965590 kernel: Key type fscrypt-provisioning registered Mar 17 20:41:29.965598 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 20:41:29.965606 kernel: ima: Allocated hash algorithm: sha1 Mar 17 20:41:29.965614 kernel: ima: No architecture policies found Mar 17 20:41:29.965622 kernel: clk: Disabling unused clocks Mar 17 20:41:29.965630 kernel: Freeing unused kernel image (initmem) memory: 47472K Mar 17 20:41:29.965638 kernel: Write protecting the kernel read-only data: 28672k Mar 17 20:41:29.965648 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Mar 17 20:41:29.965656 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K Mar 17 20:41:29.965664 kernel: Run /init as init process Mar 17 20:41:29.965672 kernel: with arguments: Mar 17 20:41:29.965680 kernel: /init Mar 17 20:41:29.965688 kernel: with environment: Mar 17 20:41:29.965696 kernel: HOME=/ Mar 17 20:41:29.965704 kernel: TERM=linux Mar 17 20:41:29.965711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 20:41:29.965723 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 20:41:29.965735 systemd[1]: Detected virtualization kvm. 
Mar 17 20:41:29.965744 systemd[1]: Detected architecture x86-64. Mar 17 20:41:29.965753 systemd[1]: Running in initrd. Mar 17 20:41:29.965762 systemd[1]: No hostname configured, using default hostname. Mar 17 20:41:29.965770 systemd[1]: Hostname set to <localhost>. Mar 17 20:41:29.965780 systemd[1]: Initializing machine ID from VM UUID. Mar 17 20:41:29.965790 systemd[1]: Queued start job for default target initrd.target. Mar 17 20:41:29.965799 systemd[1]: Started systemd-ask-password-console.path. Mar 17 20:41:29.965807 systemd[1]: Reached target cryptsetup.target. Mar 17 20:41:29.965816 systemd[1]: Reached target paths.target. Mar 17 20:41:29.965824 systemd[1]: Reached target slices.target. Mar 17 20:41:29.965832 systemd[1]: Reached target swap.target. Mar 17 20:41:29.965841 systemd[1]: Reached target timers.target. Mar 17 20:41:29.965850 systemd[1]: Listening on iscsid.socket. Mar 17 20:41:29.965860 systemd[1]: Listening on iscsiuio.socket. Mar 17 20:41:29.965876 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 20:41:29.965886 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 20:41:29.965895 systemd[1]: Listening on systemd-journald.socket. Mar 17 20:41:29.965904 systemd[1]: Listening on systemd-networkd.socket. Mar 17 20:41:29.965913 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 20:41:29.965924 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 20:41:29.965933 systemd[1]: Reached target sockets.target. Mar 17 20:41:29.965941 systemd[1]: Starting kmod-static-nodes.service... Mar 17 20:41:29.965950 systemd[1]: Finished network-cleanup.service. Mar 17 20:41:29.965959 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 20:41:29.965968 systemd[1]: Starting systemd-journald.service... Mar 17 20:41:29.965977 systemd[1]: Starting systemd-modules-load.service... Mar 17 20:41:29.965986 systemd[1]: Starting systemd-resolved.service... Mar 17 20:41:29.965995 systemd[1]: Starting systemd-vconsole-setup.service... Mar 17 20:41:29.966005 systemd[1]: Finished kmod-static-nodes.service. Mar 17 20:41:29.966014 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 20:41:29.966027 systemd-journald[186]: Journal started Mar 17 20:41:29.966072 systemd-journald[186]: Runtime Journal (/run/log/journal/9573a8af9e71474da22923867e1a2b6e) is 8.0M, max 78.4M, 70.4M free. Mar 17 20:41:29.953886 systemd-modules-load[187]: Inserted module 'overlay' Mar 17 20:41:29.990648 systemd[1]: Started systemd-journald.service. Mar 17 20:41:29.990673 kernel: audit: type=1130 audit(1742244089.979:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:29.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:29.960716 systemd-resolved[188]: Positive Trust Anchors: Mar 17 20:41:29.960729 systemd-resolved[188]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 20:41:29.960764 systemd-resolved[188]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 20:41:30.002429 kernel: audit: type=1130 audit(1742244089.994:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:29.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:29.972426 systemd-resolved[188]: Defaulting to hostname 'linux'. Mar 17 20:41:30.016946 kernel: audit: type=1130 audit(1742244089.995:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.016964 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 20:41:30.016977 kernel: audit: type=1130 audit(1742244090.009:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:29.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:29.994961 systemd[1]: Started systemd-resolved.service. Mar 17 20:41:29.995507 systemd[1]: Reached target nss-lookup.target. Mar 17 20:41:30.023427 kernel: audit: type=1130 audit(1742244090.017:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.001857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 20:41:30.008230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 20:41:30.012378 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 20:41:30.018393 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 20:41:30.029634 systemd-modules-load[187]: Inserted module 'br_netfilter' Mar 17 20:41:30.030312 kernel: Bridge firewalling registered Mar 17 20:41:30.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:41:30.042079 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 20:41:30.048148 kernel: audit: type=1130 audit(1742244090.042:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.043359 systemd[1]: Starting dracut-cmdline.service... Mar 17 20:41:30.054192 dracut-cmdline[203]: dracut-dracut-053 Mar 17 20:41:30.057071 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 20:41:30.059399 kernel: SCSI subsystem initialized Mar 17 20:41:30.075187 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 20:41:30.075217 kernel: device-mapper: uevent: version 1.0.3 Mar 17 20:41:30.078304 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 20:41:30.083250 systemd-modules-load[187]: Inserted module 'dm_multipath' Mar 17 20:41:30.084554 systemd[1]: Finished systemd-modules-load.service. Mar 17 20:41:30.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.085878 systemd[1]: Starting systemd-sysctl.service... Mar 17 20:41:30.091397 kernel: audit: type=1130 audit(1742244090.084:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.097612 systemd[1]: Finished systemd-sysctl.service. Mar 17 20:41:30.103539 kernel: audit: type=1130 audit(1742244090.098:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.140312 kernel: Loading iSCSI transport class v2.0-870. Mar 17 20:41:30.159295 kernel: iscsi: registered transport (tcp) Mar 17 20:41:30.187117 kernel: iscsi: registered transport (qla4xxx) Mar 17 20:41:30.187197 kernel: QLogic iSCSI HBA Driver Mar 17 20:41:30.241554 systemd[1]: Finished dracut-cmdline.service. Mar 17 20:41:30.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.245467 systemd[1]: Starting dracut-pre-udev.service... Mar 17 20:41:30.249423 kernel: audit: type=1130 audit(1742244090.242:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:41:30.331427 kernel: raid6: sse2x4 gen() 9556 MB/s Mar 17 20:41:30.349441 kernel: raid6: sse2x4 xor() 4992 MB/s Mar 17 20:41:30.367417 kernel: raid6: sse2x2 gen() 13656 MB/s Mar 17 20:41:30.386901 kernel: raid6: sse2x2 xor() 7273 MB/s Mar 17 20:41:30.404331 kernel: raid6: sse2x1 gen() 8827 MB/s Mar 17 20:41:30.422639 kernel: raid6: sse2x1 xor() 6577 MB/s Mar 17 20:41:30.422678 kernel: raid6: using algorithm sse2x2 gen() 13656 MB/s Mar 17 20:41:30.422699 kernel: raid6: .... xor() 7273 MB/s, rmw enabled Mar 17 20:41:30.423970 kernel: raid6: using ssse3x2 recovery algorithm Mar 17 20:41:30.443902 kernel: xor: measuring software checksum speed Mar 17 20:41:30.443941 kernel: prefetch64-sse : 15650 MB/sec Mar 17 20:41:30.445214 kernel: generic_sse : 15344 MB/sec Mar 17 20:41:30.445229 kernel: xor: using function: prefetch64-sse (15650 MB/sec) Mar 17 20:41:30.562364 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 20:41:30.578061 systemd[1]: Finished dracut-pre-udev.service. Mar 17 20:41:30.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.579000 audit: BPF prog-id=7 op=LOAD Mar 17 20:41:30.579000 audit: BPF prog-id=8 op=LOAD Mar 17 20:41:30.580222 systemd[1]: Starting systemd-udevd.service... Mar 17 20:41:30.593752 systemd-udevd[385]: Using default interface naming scheme 'v252'. Mar 17 20:41:30.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.598385 systemd[1]: Started systemd-udevd.service. Mar 17 20:41:30.599558 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 20:41:30.612881 dracut-pre-trigger[391]: rd.md=0: removing MD RAID activation Mar 17 20:41:30.644541 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 20:41:30.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.645817 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 20:41:30.707771 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 20:41:30.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:30.786763 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Mar 17 20:41:30.816612 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 20:41:30.816638 kernel: GPT:17805311 != 20971519 Mar 17 20:41:30.816657 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 20:41:30.816672 kernel: GPT:17805311 != 20971519 Mar 17 20:41:30.816691 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 20:41:30.816703 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:41:30.821321 kernel: libata version 3.00 loaded. 
Mar 17 20:41:30.825302 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 17 20:41:30.828621 kernel: scsi host0: ata_piix Mar 17 20:41:30.828763 kernel: scsi host1: ata_piix Mar 17 20:41:30.828888 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Mar 17 20:41:30.828903 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Mar 17 20:41:30.842311 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (441) Mar 17 20:41:30.853017 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 20:41:30.906717 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 20:41:30.923578 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 20:41:30.931081 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 20:41:30.932505 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 20:41:30.935590 systemd[1]: Starting disk-uuid.service... Mar 17 20:41:30.945643 disk-uuid[465]: Primary Header is updated. Mar 17 20:41:30.945643 disk-uuid[465]: Secondary Entries is updated. Mar 17 20:41:30.945643 disk-uuid[465]: Secondary Header is updated. Mar 17 20:41:30.959148 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:41:30.969337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:41:32.058358 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:41:32.058834 disk-uuid[466]: The operation has completed successfully. Mar 17 20:41:32.315000 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 20:41:32.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.315199 systemd[1]: Finished disk-uuid.service. Mar 17 20:41:32.342864 systemd[1]: Starting verity-setup.service... Mar 17 20:41:32.381353 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Mar 17 20:41:32.494363 systemd[1]: Found device dev-mapper-usr.device. Mar 17 20:41:32.498160 systemd[1]: Mounting sysusr-usr.mount... Mar 17 20:41:32.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.500934 systemd[1]: Finished verity-setup.service. Mar 17 20:41:32.633155 systemd[1]: Mounted sysusr-usr.mount. Mar 17 20:41:32.634408 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 20:41:32.633907 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 20:41:32.634847 systemd[1]: Starting ignition-setup.service... Mar 17 20:41:32.639800 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 20:41:32.651312 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:41:32.651371 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:41:32.651388 kernel: BTRFS info (device vda6): has skinny extents Mar 17 20:41:32.669582 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 17 20:41:32.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.683726 systemd[1]: Finished ignition-setup.service. Mar 17 20:41:32.685316 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 20:41:32.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.735215 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 20:41:32.736000 audit: BPF prog-id=9 op=LOAD Mar 17 20:41:32.737609 systemd[1]: Starting systemd-networkd.service... Mar 17 20:41:32.764020 systemd-networkd[635]: lo: Link UP Mar 17 20:41:32.764793 systemd-networkd[635]: lo: Gained carrier Mar 17 20:41:32.765803 systemd-networkd[635]: Enumeration completed Mar 17 20:41:32.766109 systemd-networkd[635]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 20:41:32.767646 systemd[1]: Started systemd-networkd.service. Mar 17 20:41:32.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.768005 systemd-networkd[635]: eth0: Link UP Mar 17 20:41:32.768009 systemd-networkd[635]: eth0: Gained carrier Mar 17 20:41:32.769327 systemd[1]: Reached target network.target. Mar 17 20:41:32.771701 systemd[1]: Starting iscsiuio.service... Mar 17 20:41:32.780843 systemd[1]: Started iscsiuio.service. Mar 17 20:41:32.782208 systemd[1]: Starting iscsid.service... Mar 17 20:41:32.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.782649 systemd-networkd[635]: eth0: DHCPv4 address 172.24.4.253/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 17 20:41:32.788432 iscsid[640]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 20:41:32.788432 iscsid[640]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 20:41:32.788432 iscsid[640]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 20:41:32.788432 iscsid[640]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 20:41:32.788432 iscsid[640]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 20:41:32.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.789873 systemd[1]: Started iscsid.service.
Mar 17 20:41:32.806664 iscsid[640]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 20:41:32.791221 systemd[1]: Starting dracut-initqueue.service... Mar 17 20:41:32.803149 systemd[1]: Finished dracut-initqueue.service. Mar 17 20:41:32.804526 systemd[1]: Reached target remote-fs-pre.target. Mar 17 20:41:32.806133 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 20:41:32.807596 systemd[1]: Reached target remote-fs.target. Mar 17 20:41:32.810331 systemd[1]: Starting dracut-pre-mount.service... Mar 17 20:41:32.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.820757 systemd[1]: Finished dracut-pre-mount.service. Mar 17 20:41:32.990796 ignition[572]: Ignition 2.14.0 Mar 17 20:41:32.990830 ignition[572]: Stage: fetch-offline Mar 17 20:41:32.990932 ignition[572]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:41:32.990985 ignition[572]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:41:32.996328 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 20:41:32.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:32.993352 ignition[572]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:41:32.993562 ignition[572]: parsed url from cmdline: "" Mar 17 20:41:32.999485 systemd[1]: Starting ignition-fetch.service... Mar 17 20:41:32.993571 ignition[572]: no config URL provided Mar 17 20:41:32.993584 ignition[572]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 20:41:32.993611 ignition[572]: no config at "/usr/lib/ignition/user.ign" Mar 17 20:41:32.993622 ignition[572]: failed to fetch config: resource requires networking Mar 17 20:41:32.994367 ignition[572]: Ignition finished successfully Mar 17 20:41:33.016624 ignition[659]: Ignition 2.14.0 Mar 17 20:41:33.016647 ignition[659]: Stage: fetch Mar 17 20:41:33.016905 ignition[659]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:41:33.016946 ignition[659]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:41:33.019188 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:41:33.019459 ignition[659]: parsed url from cmdline: "" Mar 17 20:41:33.019468 ignition[659]: no config URL provided Mar 17 20:41:33.019481 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 20:41:33.019501 ignition[659]: no config at "/usr/lib/ignition/user.ign" Mar 17 20:41:33.031585 ignition[659]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 17 20:41:33.031650 ignition[659]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Mar 17 20:41:33.034821 ignition[659]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 17 20:41:33.475030 ignition[659]: GET result: OK Mar 17 20:41:33.475253 ignition[659]: parsing config with SHA512: 1ca63c57b2f161292bd8abba3ec39e426b2da645e7348a65f0f4937498dbaa7940ca57035987e645096c911511adda25921fb116d521917c659918adfd92dddb Mar 17 20:41:33.498234 unknown[659]: fetched base config from "system" Mar 17 20:41:33.499329 unknown[659]: fetched base config from "system" Mar 17 20:41:33.499339 unknown[659]: fetched user config from "openstack" Mar 17 20:41:33.499940 ignition[659]: fetch: fetch complete Mar 17 20:41:33.501458 systemd[1]: Finished ignition-fetch.service. Mar 17 20:41:33.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:33.499946 ignition[659]: fetch: fetch passed Mar 17 20:41:33.504098 systemd[1]: Starting ignition-kargs.service... Mar 17 20:41:33.500008 ignition[659]: Ignition finished successfully Mar 17 20:41:33.524616 ignition[665]: Ignition 2.14.0 Mar 17 20:41:33.524627 ignition[665]: Stage: kargs Mar 17 20:41:33.524744 ignition[665]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:41:33.528543 systemd[1]: Finished ignition-kargs.service. Mar 17 20:41:33.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:33.524765 ignition[665]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:41:33.530324 systemd[1]: Starting ignition-disks.service... Mar 17 20:41:33.525815 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:41:33.527091 ignition[665]: kargs: kargs passed Mar 17 20:41:33.527143 ignition[665]: Ignition finished successfully Mar 17 20:41:33.539229 ignition[671]: Ignition 2.14.0 Mar 17 20:41:33.539243 ignition[671]: Stage: disks Mar 17 20:41:33.539372 ignition[671]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:41:33.539393 ignition[671]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:41:33.540410 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:41:33.541683 ignition[671]: disks: disks passed Mar 17 20:41:33.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:33.543727 systemd[1]: Finished ignition-disks.service. Mar 17 20:41:33.541744 ignition[671]: Ignition finished successfully Mar 17 20:41:33.545129 systemd[1]: Reached target initrd-root-device.target. Mar 17 20:41:33.546536 systemd[1]: Reached target local-fs-pre.target. Mar 17 20:41:33.548148 systemd[1]: Reached target local-fs.target. Mar 17 20:41:33.549939 systemd[1]: Reached target sysinit.target. Mar 17 20:41:33.551511 systemd[1]: Reached target basic.target. Mar 17 20:41:33.554651 systemd[1]: Starting systemd-fsck-root.service... 
Mar 17 20:41:33.578692 systemd-fsck[678]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks Mar 17 20:41:33.586811 systemd[1]: Finished systemd-fsck-root.service. Mar 17 20:41:33.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:33.589778 systemd[1]: Mounting sysroot.mount... Mar 17 20:41:33.614335 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 20:41:33.615531 systemd[1]: Mounted sysroot.mount. Mar 17 20:41:33.618140 systemd[1]: Reached target initrd-root-fs.target. Mar 17 20:41:33.624570 systemd[1]: Mounting sysroot-usr.mount... Mar 17 20:41:33.628213 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 20:41:33.631857 systemd[1]: Starting flatcar-openstack-hostname.service... Mar 17 20:41:33.634844 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 20:41:33.635827 systemd[1]: Reached target ignition-diskful.target. Mar 17 20:41:33.644602 systemd[1]: Mounted sysroot-usr.mount. Mar 17 20:41:33.653394 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 20:41:33.658420 systemd[1]: Starting initrd-setup-root.service... Mar 17 20:41:33.672364 initrd-setup-root[690]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 20:41:33.690310 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (685) Mar 17 20:41:33.694624 initrd-setup-root[698]: cut: /sysroot/etc/group: No such file or directory Mar 17 20:41:33.701243 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:41:33.701305 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:41:33.701318 kernel: BTRFS info (device vda6): has skinny extents Mar 17 20:41:33.702437 initrd-setup-root[706]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 20:41:33.706955 initrd-setup-root[730]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 20:41:33.715002 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 20:41:33.777174 systemd[1]: Finished initrd-setup-root.service. Mar 17 20:41:33.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:33.778569 systemd[1]: Starting ignition-mount.service... Mar 17 20:41:33.779543 systemd[1]: Starting sysroot-boot.service... Mar 17 20:41:33.788785 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 20:41:33.788901 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Mar 17 20:41:33.803039 ignition[753]: INFO : Ignition 2.14.0 Mar 17 20:41:33.803039 ignition[753]: INFO : Stage: mount Mar 17 20:41:33.804437 ignition[753]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:41:33.804437 ignition[753]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:41:33.804437 ignition[753]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:41:33.807259 ignition[753]: INFO : mount: mount passed Mar 17 20:41:33.807259 ignition[753]: INFO : Ignition finished successfully Mar 17 20:41:33.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:33.807086 systemd[1]: Finished ignition-mount.service. Mar 17 20:41:33.821200 systemd[1]: Finished sysroot-boot.service. Mar 17 20:41:33.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:33.845769 coreos-metadata[684]: Mar 17 20:41:33.845 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 20:41:33.863903 coreos-metadata[684]: Mar 17 20:41:33.863 INFO Fetch successful Mar 17 20:41:33.863903 coreos-metadata[684]: Mar 17 20:41:33.863 INFO wrote hostname ci-3510-3-7-0-2f3ee5d9b1.novalocal to /sysroot/etc/hostname Mar 17 20:41:33.868629 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 17 20:41:33.868800 systemd[1]: Finished flatcar-openstack-hostname.service. Mar 17 20:41:33.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:33.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:33.871907 systemd[1]: Starting ignition-files.service... Mar 17 20:41:33.878234 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 20:41:33.895367 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (763) Mar 17 20:41:33.903044 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:41:33.903103 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:41:33.903130 kernel: BTRFS info (device vda6): has skinny extents Mar 17 20:41:33.917003 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
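
Annotation: coreos-metadata above fetches the hostname from the EC2-compatible metadata path and writes it into the new root. A rough equivalent of that step, with the URL and target path taken from the log; error handling is simplified and this is not the agent's actual code.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Metadata path logged by coreos-metadata above.
	resp, err := http.Get("http://169.254.169.254/latest/meta-data/hostname")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	b, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	hostname := strings.TrimSpace(string(b))

	// Target path as logged by flatcar-openstack-hostname above.
	if err := os.WriteFile("/sysroot/etc/hostname", []byte(hostname+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote hostname %s to /sysroot/etc/hostname\n", hostname)
}
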
Mar 17 20:41:33.937545 ignition[782]: INFO : Ignition 2.14.0 Mar 17 20:41:33.937545 ignition[782]: INFO : Stage: files Mar 17 20:41:33.940536 ignition[782]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:41:33.940536 ignition[782]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:41:33.945743 ignition[782]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:41:33.945743 ignition[782]: DEBUG : files: compiled without relabeling support, skipping Mar 17 20:41:33.949842 ignition[782]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 20:41:33.949842 ignition[782]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 20:41:33.954408 ignition[782]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 20:41:33.954408 ignition[782]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 20:41:33.958958 ignition[782]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 20:41:33.957834 unknown[782]: wrote ssh authorized keys file for user: core Mar 17 20:41:33.962943 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 20:41:33.962943 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Mar 17 20:41:34.044280 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 20:41:34.346650 systemd-networkd[635]: eth0: Gained IPv6LL Mar 17 20:41:34.539966 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 20:41:34.542675 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 20:41:34.542675 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 20:41:35.209208 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 20:41:35.666579 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 20:41:35.667647 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 20:41:35.668728 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 20:41:35.669614 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 20:41:35.670703 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 20:41:35.671676 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 20:41:35.671676 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 20:41:35.671676 ignition[782]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 20:41:35.671676 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 20:41:35.678991 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 20:41:35.678991 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 20:41:35.678991 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 20:41:35.678991 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 20:41:35.678991 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 20:41:35.678991 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Mar 17 20:41:36.252024 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 20:41:38.837524 ignition[782]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 17 20:41:38.838988 ignition[782]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 17 20:41:38.838988 ignition[782]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 17 20:41:38.838988 ignition[782]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Mar 17 20:41:38.841380 ignition[782]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 20:41:38.841380 ignition[782]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 20:41:38.841380 ignition[782]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Mar 17 20:41:38.841380 ignition[782]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 20:41:38.841380 ignition[782]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 20:41:38.841380 ignition[782]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 17 20:41:38.841380 ignition[782]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 20:41:38.851460 ignition[782]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 20:41:38.851460 ignition[782]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 20:41:38.851460 ignition[782]: INFO : files: files passed Mar 17 20:41:38.851460 ignition[782]: INFO : Ignition finished successfully Mar 17 20:41:38.869401 kernel: kauditd_printk_skb: 27 callbacks suppressed Mar 17 
20:41:38.869434 kernel: audit: type=1130 audit(1742244098.855:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.853590 systemd[1]: Finished ignition-files.service. Mar 17 20:41:38.882257 kernel: audit: type=1130 audit(1742244098.870:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.882304 kernel: audit: type=1131 audit(1742244098.876:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.856563 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 20:41:38.888741 kernel: audit: type=1130 audit(1742244098.882:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.888845 initrd-setup-root-after-ignition[807]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 20:41:38.865755 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 20:41:38.866837 systemd[1]: Starting ignition-quench.service... Mar 17 20:41:38.870168 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 20:41:38.870302 systemd[1]: Finished ignition-quench.service. Mar 17 20:41:38.876836 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 20:41:38.883419 systemd[1]: Reached target ignition-complete.target. Mar 17 20:41:38.890355 systemd[1]: Starting initrd-parse-etc.service... Mar 17 20:41:38.911127 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 20:41:38.911254 systemd[1]: Finished initrd-parse-etc.service. Mar 17 20:41:38.940458 kernel: audit: type=1130 audit(1742244098.913:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.940516 kernel: audit: type=1131 audit(1742244098.913:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:41:38.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.914091 systemd[1]: Reached target initrd-fs.target. Mar 17 20:41:38.940904 systemd[1]: Reached target initrd.target. Mar 17 20:41:38.942824 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 20:41:38.943918 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 20:41:38.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.964120 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 20:41:38.971417 kernel: audit: type=1130 audit(1742244098.964:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.971754 systemd[1]: Starting initrd-cleanup.service... Mar 17 20:41:38.983764 systemd[1]: Stopped target nss-lookup.target. Mar 17 20:41:38.985131 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 20:41:38.985760 systemd[1]: Stopped target timers.target. Mar 17 20:41:38.986818 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 20:41:38.994731 kernel: audit: type=1131 audit(1742244098.987:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:38.986936 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 20:41:38.988244 systemd[1]: Stopped target initrd.target. Mar 17 20:41:38.995271 systemd[1]: Stopped target basic.target. Mar 17 20:41:38.996423 systemd[1]: Stopped target ignition-complete.target. Mar 17 20:41:38.997599 systemd[1]: Stopped target ignition-diskful.target. Mar 17 20:41:38.998627 systemd[1]: Stopped target initrd-root-device.target. Mar 17 20:41:39.000045 systemd[1]: Stopped target remote-fs.target. Mar 17 20:41:39.001088 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 20:41:39.002201 systemd[1]: Stopped target sysinit.target. Mar 17 20:41:39.003314 systemd[1]: Stopped target local-fs.target. Mar 17 20:41:39.004457 systemd[1]: Stopped target local-fs-pre.target. Mar 17 20:41:39.006387 systemd[1]: Stopped target swap.target. Mar 17 20:41:39.015642 kernel: audit: type=1131 audit(1742244099.009:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.007843 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
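
Annotation: each artifact in the files stage further above (helm, cilium, the kubernetes sysext image) is fetched with numbered attempts ("GET ...: attempt #1"). A sketch of that retry pattern under an assumed linear backoff; fetchWithRetry is a hypothetical helper, not Ignition's fetch logic.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// fetchWithRetry mirrors the "GET <url>: attempt #N" lines in the files
// stage above: try, log the attempt, back off, try again.
func fetchWithRetry(url, dest string, attempts int) error {
	for i := 1; i <= attempts; i++ {
		fmt.Printf("GET %s: attempt #%d\n", url, i)
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			defer resp.Body.Close()
			out, err := os.Create(dest)
			if err != nil {
				return err
			}
			defer out.Close()
			_, err = io.Copy(out, resp.Body)
			return err
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(time.Duration(i) * time.Second) // assumed backoff policy
	}
	return fmt.Errorf("giving up on %s after %d attempts", url, attempts)
}

func main() {
	// URL and destination taken from the files stage above.
	if err := fetchWithRetry("https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz",
		"/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz", 3); err != nil {
		panic(err)
	}
}
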
Mar 17 20:41:39.008187 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 20:41:39.024904 kernel: audit: type=1131 audit(1742244099.018:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.009849 systemd[1]: Stopped target cryptsetup.target. Mar 17 20:41:39.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.016965 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 20:41:39.017334 systemd[1]: Stopped dracut-initqueue.service. Mar 17 20:41:39.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.018995 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 20:41:39.019406 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 20:41:39.026661 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 20:41:39.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.042538 iscsid[640]: iscsid shutting down. Mar 17 20:41:39.027083 systemd[1]: Stopped ignition-files.service. Mar 17 20:41:39.030937 systemd[1]: Stopping ignition-mount.service... Mar 17 20:41:39.033087 systemd[1]: Stopping iscsid.service... Mar 17 20:41:39.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.037276 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 20:41:39.037526 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 20:41:39.061786 ignition[820]: INFO : Ignition 2.14.0 Mar 17 20:41:39.061786 ignition[820]: INFO : Stage: umount Mar 17 20:41:39.061786 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:41:39.061786 ignition[820]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:41:39.039269 systemd[1]: Stopping sysroot-boot.service... Mar 17 20:41:39.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:41:39.069559 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:41:39.069559 ignition[820]: INFO : umount: umount passed Mar 17 20:41:39.069559 ignition[820]: INFO : Ignition finished successfully Mar 17 20:41:39.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.039900 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 20:41:39.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.040062 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 20:41:39.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.040858 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 20:41:39.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.040997 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 20:41:39.047299 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 20:41:39.051082 systemd[1]: Stopped iscsid.service. Mar 17 20:41:39.054985 systemd[1]: Stopping iscsiuio.service... Mar 17 20:41:39.066346 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 20:41:39.066446 systemd[1]: Stopped iscsiuio.service. Mar 17 20:41:39.069191 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 20:41:39.069266 systemd[1]: Finished initrd-cleanup.service. Mar 17 20:41:39.070189 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 20:41:39.070269 systemd[1]: Stopped ignition-mount.service. Mar 17 20:41:39.073165 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 20:41:39.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.073223 systemd[1]: Stopped ignition-disks.service. Mar 17 20:41:39.073885 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 20:41:39.073931 systemd[1]: Stopped ignition-kargs.service. Mar 17 20:41:39.075054 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 20:41:39.075093 systemd[1]: Stopped ignition-fetch.service. Mar 17 20:41:39.076071 systemd[1]: Stopped target network.target. Mar 17 20:41:39.083389 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 20:41:39.083466 systemd[1]: Stopped ignition-fetch-offline.service. 
Mar 17 20:41:39.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.084251 systemd[1]: Stopped target paths.target. Mar 17 20:41:39.085429 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 20:41:39.087372 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 20:41:39.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.088338 systemd[1]: Stopped target slices.target. Mar 17 20:41:39.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.089335 systemd[1]: Stopped target sockets.target. Mar 17 20:41:39.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.090407 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 20:41:39.090448 systemd[1]: Closed iscsid.socket. Mar 17 20:41:39.090899 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 20:41:39.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.090929 systemd[1]: Closed iscsiuio.socket. Mar 17 20:41:39.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.091396 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 20:41:39.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.091442 systemd[1]: Stopped ignition-setup.service. Mar 17 20:41:39.092756 systemd[1]: Stopping systemd-networkd.service... Mar 17 20:41:39.093985 systemd[1]: Stopping systemd-resolved.service... Mar 17 20:41:39.095998 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 20:41:39.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.096330 systemd-networkd[635]: eth0: DHCPv6 lease lost Mar 17 20:41:39.111000 audit: BPF prog-id=9 op=UNLOAD Mar 17 20:41:39.096494 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 20:41:39.113000 audit: BPF prog-id=6 op=UNLOAD Mar 17 20:41:39.096585 systemd[1]: Stopped sysroot-boot.service. Mar 17 20:41:39.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.097176 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Mar 17 20:41:39.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.097222 systemd[1]: Stopped initrd-setup-root.service. Mar 17 20:41:39.097881 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 20:41:39.097966 systemd[1]: Stopped systemd-networkd.service. Mar 17 20:41:39.099253 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 20:41:39.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.099339 systemd[1]: Closed systemd-networkd.socket. Mar 17 20:41:39.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.100851 systemd[1]: Stopping network-cleanup.service... Mar 17 20:41:39.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.102962 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 20:41:39.103012 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 20:41:39.104031 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 20:41:39.104069 systemd[1]: Stopped systemd-sysctl.service. Mar 17 20:41:39.105406 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 20:41:39.105451 systemd[1]: Stopped systemd-modules-load.service. Mar 17 20:41:39.106360 systemd[1]: Stopping systemd-udevd.service... Mar 17 20:41:39.108338 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 20:41:39.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.108817 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 20:41:39.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.108948 systemd[1]: Stopped systemd-resolved.service. Mar 17 20:41:39.113009 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 20:41:39.113156 systemd[1]: Stopped systemd-udevd.service. Mar 17 20:41:39.114829 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 20:41:39.114921 systemd[1]: Stopped network-cleanup.service. Mar 17 20:41:39.115670 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 20:41:39.115708 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 20:41:39.116676 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 20:41:39.116708 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 20:41:39.117783 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Mar 17 20:41:39.117825 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 20:41:39.119390 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 20:41:39.119432 systemd[1]: Stopped dracut-cmdline.service. Mar 17 20:41:39.120491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 20:41:39.120532 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 20:41:39.122207 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 20:41:39.129320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 20:41:39.129390 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 20:41:39.130679 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 20:41:39.130771 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 20:41:39.131841 systemd[1]: Reached target initrd-switch-root.target. Mar 17 20:41:39.133580 systemd[1]: Starting initrd-switch-root.service... Mar 17 20:41:39.151212 systemd[1]: Switching root. Mar 17 20:41:39.172180 systemd-journald[186]: Journal stopped Mar 17 20:41:43.527992 systemd-journald[186]: Received SIGTERM from PID 1 (n/a). Mar 17 20:41:43.528039 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 20:41:43.528059 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 20:41:43.528070 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 20:41:43.528081 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 20:41:43.528092 kernel: SELinux: policy capability open_perms=1 Mar 17 20:41:43.528105 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 20:41:43.528116 kernel: SELinux: policy capability always_check_network=0 Mar 17 20:41:43.528127 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 20:41:43.528139 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 20:41:43.528165 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 20:41:43.528681 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 20:41:43.528710 systemd[1]: Successfully loaded SELinux policy in 91.604ms. Mar 17 20:41:43.528729 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.959ms. Mar 17 20:41:43.529177 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 20:41:43.529200 systemd[1]: Detected virtualization kvm. Mar 17 20:41:43.529212 systemd[1]: Detected architecture x86-64. Mar 17 20:41:43.529224 systemd[1]: Detected first boot. Mar 17 20:41:43.529235 systemd[1]: Hostname set to . Mar 17 20:41:43.529251 systemd[1]: Initializing machine ID from VM UUID. Mar 17 20:41:43.529263 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 20:41:43.529275 systemd[1]: Populated /etc with preset unit settings. Mar 17 20:41:43.529302 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 20:41:43.529318 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Mar 17 20:41:43.529331 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:41:43.529346 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 20:41:43.529359 systemd[1]: Stopped initrd-switch-root.service. Mar 17 20:41:43.529371 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 20:41:43.529382 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 20:41:43.529394 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 20:41:43.529406 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Mar 17 20:41:43.529417 systemd[1]: Created slice system-getty.slice. Mar 17 20:41:43.529428 systemd[1]: Created slice system-modprobe.slice. Mar 17 20:41:43.529443 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 20:41:43.529456 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 20:41:43.529472 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 20:41:43.529485 systemd[1]: Created slice user.slice. Mar 17 20:41:43.529497 systemd[1]: Started systemd-ask-password-console.path. Mar 17 20:41:43.529510 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 20:41:43.529523 systemd[1]: Set up automount boot.automount. Mar 17 20:41:43.529539 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 20:41:43.529554 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 20:41:43.529567 systemd[1]: Stopped target initrd-fs.target. Mar 17 20:41:43.529580 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 20:41:43.529592 systemd[1]: Reached target integritysetup.target. Mar 17 20:41:43.529605 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 20:41:43.529617 systemd[1]: Reached target remote-fs.target. Mar 17 20:41:43.529631 systemd[1]: Reached target slices.target. Mar 17 20:41:43.529645 systemd[1]: Reached target swap.target. Mar 17 20:41:43.529660 systemd[1]: Reached target torcx.target. Mar 17 20:41:43.529673 systemd[1]: Reached target veritysetup.target. Mar 17 20:41:43.529685 systemd[1]: Listening on systemd-coredump.socket. Mar 17 20:41:43.529698 systemd[1]: Listening on systemd-initctl.socket. Mar 17 20:41:43.529712 systemd[1]: Listening on systemd-networkd.socket. Mar 17 20:41:43.529724 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 20:41:43.529737 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 20:41:43.529749 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 20:41:43.529760 systemd[1]: Mounting dev-hugepages.mount... Mar 17 20:41:43.529773 systemd[1]: Mounting dev-mqueue.mount... Mar 17 20:41:43.529784 systemd[1]: Mounting media.mount... Mar 17 20:41:43.529796 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:43.529809 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 20:41:43.529820 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 20:41:43.529831 systemd[1]: Mounting tmp.mount... Mar 17 20:41:43.529842 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 20:41:43.529854 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:41:43.529865 systemd[1]: Starting kmod-static-nodes.service... Mar 17 20:41:43.529879 systemd[1]: Starting modprobe@configfs.service... 
Mar 17 20:41:43.529890 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:41:43.529902 systemd[1]: Starting modprobe@drm.service... Mar 17 20:41:43.529914 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:41:43.529926 systemd[1]: Starting modprobe@fuse.service... Mar 17 20:41:43.529937 systemd[1]: Starting modprobe@loop.service... Mar 17 20:41:43.529948 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 20:41:43.529960 kernel: fuse: init (API version 7.34) Mar 17 20:41:43.529971 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 20:41:43.529985 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 20:41:43.529996 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 20:41:43.530007 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 20:41:43.530019 systemd[1]: Stopped systemd-journald.service. Mar 17 20:41:43.530030 kernel: loop: module loaded Mar 17 20:41:43.530041 systemd[1]: Starting systemd-journald.service... Mar 17 20:41:43.530054 systemd[1]: Starting systemd-modules-load.service... Mar 17 20:41:43.530065 systemd[1]: Starting systemd-network-generator.service... Mar 17 20:41:43.530076 systemd[1]: Starting systemd-remount-fs.service... Mar 17 20:41:43.530219 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 20:41:43.530234 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 20:41:43.530246 systemd[1]: Stopped verity-setup.service. Mar 17 20:41:43.530258 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:43.530269 systemd[1]: Mounted dev-hugepages.mount. Mar 17 20:41:43.530335 systemd[1]: Mounted dev-mqueue.mount. Mar 17 20:41:43.530349 systemd[1]: Mounted media.mount. Mar 17 20:41:43.530364 systemd-journald[926]: Journal started Mar 17 20:41:43.530414 systemd-journald[926]: Runtime Journal (/run/log/journal/9573a8af9e71474da22923867e1a2b6e) is 8.0M, max 78.4M, 70.4M free. 
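
Annotation: several units above are skipped on declarative condition checks, e.g. ConditionVirtualization=xen and ConditionPathExists=!/etc/nsswitch.conf. The path condition boils down to an existence test with an optional leading '!' for negation; a simplified sketch of those semantics, not systemd's implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

// conditionPathExists mimics the check behind skip messages like
// "ConditionPathExists=!/etc/nsswitch.conf" above.
func conditionPathExists(cond string) bool {
	negate := strings.HasPrefix(cond, "!")
	path := strings.TrimPrefix(cond, "!")
	_, err := os.Stat(path)
	exists := err == nil
	if negate {
		return !exists
	}
	return exists
}

func main() {
	// False (unit skipped) when /etc/nsswitch.conf exists, as in the log.
	fmt.Println(conditionPathExists("!/etc/nsswitch.conf"))
}
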
Mar 17 20:41:39.466000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 20:41:39.593000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 20:41:39.593000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 20:41:39.593000 audit: BPF prog-id=10 op=LOAD Mar 17 20:41:39.593000 audit: BPF prog-id=10 op=UNLOAD Mar 17 20:41:39.593000 audit: BPF prog-id=11 op=LOAD Mar 17 20:41:39.593000 audit: BPF prog-id=11 op=UNLOAD Mar 17 20:41:39.732000 audit[853]: AVC avc: denied { associate } for pid=853 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 20:41:39.732000 audit[853]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d89c a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=836 pid=853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:41:39.732000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 20:41:39.736000 audit[853]: AVC avc: denied { associate } for pid=853 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 20:41:39.736000 audit[853]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d975 a2=1ed a3=0 items=2 ppid=836 pid=853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:41:39.736000 audit: CWD cwd="/" Mar 17 20:41:39.736000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:39.736000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:39.736000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 20:41:43.319000 audit: BPF prog-id=12 op=LOAD Mar 17 20:41:43.319000 audit: BPF prog-id=3 op=UNLOAD Mar 17 20:41:43.319000 audit: BPF prog-id=13 op=LOAD Mar 17 20:41:43.319000 audit: BPF prog-id=14 op=LOAD Mar 17 20:41:43.319000 audit: BPF prog-id=4 op=UNLOAD Mar 17 20:41:43.319000 audit: BPF prog-id=5 op=UNLOAD Mar 17 20:41:43.320000 audit: BPF prog-id=15 op=LOAD Mar 17 20:41:43.320000 audit: BPF prog-id=12 op=UNLOAD Mar 17 
20:41:43.320000 audit: BPF prog-id=16 op=LOAD Mar 17 20:41:43.320000 audit: BPF prog-id=17 op=LOAD Mar 17 20:41:43.320000 audit: BPF prog-id=13 op=UNLOAD Mar 17 20:41:43.320000 audit: BPF prog-id=14 op=UNLOAD Mar 17 20:41:43.321000 audit: BPF prog-id=18 op=LOAD Mar 17 20:41:43.321000 audit: BPF prog-id=15 op=UNLOAD Mar 17 20:41:43.321000 audit: BPF prog-id=19 op=LOAD Mar 17 20:41:43.321000 audit: BPF prog-id=20 op=LOAD Mar 17 20:41:43.321000 audit: BPF prog-id=16 op=UNLOAD Mar 17 20:41:43.321000 audit: BPF prog-id=17 op=UNLOAD Mar 17 20:41:43.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.329000 audit: BPF prog-id=18 op=UNLOAD Mar 17 20:41:43.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.536251 systemd[1]: Started systemd-journald.service. Mar 17 20:41:43.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.489000 audit: BPF prog-id=21 op=LOAD Mar 17 20:41:43.489000 audit: BPF prog-id=22 op=LOAD Mar 17 20:41:43.489000 audit: BPF prog-id=23 op=LOAD Mar 17 20:41:43.489000 audit: BPF prog-id=19 op=UNLOAD Mar 17 20:41:43.489000 audit: BPF prog-id=20 op=UNLOAD Mar 17 20:41:43.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:41:43.526000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 20:41:43.526000 audit[926]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd7a4a93a0 a2=4000 a3=7ffd7a4a943c items=0 ppid=1 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:41:43.526000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 20:41:43.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.729727 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 20:41:43.317689 systemd[1]: Queued start job for default target multi-user.target. Mar 17 20:41:39.730892 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 20:41:43.317702 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 20:41:39.730927 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 20:41:43.322479 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 20:41:39.730966 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 20:41:43.535005 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 20:41:43.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:39.730980 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 20:41:43.535552 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 20:41:39.731024 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 20:41:43.536098 systemd[1]: Mounted tmp.mount. 
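
Annotation: the audit PROCTITLE fields above carry the recorded process's argv as hex with NUL separators, which is why they look opaque. Decoding the torcx-generator record recovers the generator command line (auditd truncates the value at 128 bytes, so the last argument is cut short); a tiny decoder:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value copied from the torcx-generator audit record above.
	const proctitle = "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The kernel records argv NUL-separated; swap the NULs for spaces.
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
}
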
Mar 17 20:41:39.731043 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 20:41:43.536822 systemd[1]: Finished kmod-static-nodes.service. Mar 17 20:41:39.731263 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 20:41:43.537634 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 20:41:39.731333 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 20:41:43.537760 systemd[1]: Finished modprobe@configfs.service. Mar 17 20:41:39.731350 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 20:41:43.538583 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:41:39.732260 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 20:41:43.538697 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:41:39.732322 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 20:41:39.732344 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 20:41:43.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:41:43.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.540808 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 20:41:39.732362 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 20:41:43.540925 systemd[1]: Finished modprobe@drm.service. Mar 17 20:41:39.732383 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 20:41:43.542252 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:41:39.732399 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 20:41:43.542437 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:41:42.937533 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:42Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 20:41:43.543138 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 20:41:42.937831 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:42Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 20:41:43.543442 systemd[1]: Finished modprobe@fuse.service. Mar 17 20:41:42.940896 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:42Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 20:41:43.544132 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:41:42.942440 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:42Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 20:41:43.546405 systemd[1]: Finished modprobe@loop.service. 
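
Annotation: the torcx-generator debug lines above walk a fixed list of store paths and log "store skipped" for each one that does not exist. A sketch reproducing that probe order, with the paths copied from the generator's own store_paths line:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Store search paths as logged by torcx-generator above.
	stores := []string{
		"/usr/share/torcx/store",
		"/usr/share/oem/torcx/store/3510.3.7",
		"/usr/share/oem/torcx/store",
		"/var/lib/torcx/store/3510.3.7",
		"/var/lib/torcx/store",
	}
	for _, p := range stores {
		if _, err := os.Stat(p); err != nil {
			// Mirrors the "store skipped" messages in the log.
			fmt.Printf("store skipped: %v\n", err)
			continue
		}
		fmt.Printf("store found: %s\n", p)
	}
}
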
Mar 17 20:41:42.942613 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:42Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 20:41:42.942780 /usr/lib/systemd/system-generators/torcx-generator[853]: time="2025-03-17T20:41:42Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 20:41:43.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.547338 systemd[1]: Finished systemd-modules-load.service. Mar 17 20:41:43.548046 systemd[1]: Finished systemd-network-generator.service. Mar 17 20:41:43.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.548745 systemd[1]: Finished systemd-remount-fs.service. Mar 17 20:41:43.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.549681 systemd[1]: Reached target network-pre.target. Mar 17 20:41:43.551388 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 20:41:43.557527 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 20:41:43.558079 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 20:41:43.561588 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 20:41:43.563145 systemd[1]: Starting systemd-journal-flush.service... Mar 17 20:41:43.563837 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:41:43.564862 systemd[1]: Starting systemd-random-seed.service... Mar 17 20:41:43.565534 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:41:43.566622 systemd[1]: Starting systemd-sysctl.service... Mar 17 20:41:43.570631 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 20:41:43.571348 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 20:41:43.578894 systemd-journald[926]: Time spent on flushing to /var/log/journal/9573a8af9e71474da22923867e1a2b6e is 34.794ms for 1109 entries. Mar 17 20:41:43.578894 systemd-journald[926]: System Journal (/var/log/journal/9573a8af9e71474da22923867e1a2b6e) is 8.0M, max 584.8M, 576.8M free. Mar 17 20:41:43.637201 systemd-journald[926]: Received client request to flush runtime journal. 
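
Annotation: systemd-journald above reports runtime and system journal usage against per-journal caps. A rough way to reproduce the usage figure by summing journal files on disk (the machine-id subdirectory appears in the logged path); journald's own accounting also counts filesystem overhead, so the numbers will not match exactly.

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	// Runtime journal directory, as logged by systemd-journald above.
	root := "/run/log/journal"

	var total int64
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		total += info.Size()
		return nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime journal usage: %.1fM\n", float64(total)/(1024*1024))
}
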
Mar 17 20:41:43.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:43.595727 systemd[1]: Finished systemd-random-seed.service. Mar 17 20:41:43.596369 systemd[1]: Reached target first-boot-complete.target. Mar 17 20:41:43.608504 systemd[1]: Finished systemd-sysctl.service. Mar 17 20:41:43.612170 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 20:41:43.613881 systemd[1]: Starting systemd-sysusers.service... Mar 17 20:41:43.633764 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 20:41:43.635461 systemd[1]: Starting systemd-udev-settle.service... Mar 17 20:41:43.638263 systemd[1]: Finished systemd-journal-flush.service. Mar 17 20:41:43.646113 udevadm[961]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 20:41:43.669459 systemd[1]: Finished systemd-sysusers.service. Mar 17 20:41:43.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.349852 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 20:41:44.366702 kernel: kauditd_printk_skb: 106 callbacks suppressed Mar 17 20:41:44.366806 kernel: audit: type=1130 audit(1742244104.350:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.354000 audit: BPF prog-id=24 op=LOAD Mar 17 20:41:44.368057 systemd[1]: Starting systemd-udevd.service... 
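The SERVICE_START/SERVICE_STOP audit records above all follow the same msg='unit=... res=...' shape. A small sketch (a hypothetical helper, not part of any tool appearing in this log) for extracting (unit, event, result) triples from such records, even when several records share one physical line:

    import re

    SERVICE_RE = re.compile(
        r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP)"
        r".*?unit=([^ ]+).*?res=(\w+)")

    def service_events(lines):
        """Yield (unit, event, result) for each audit service record."""
        for line in lines:
            for m in SERVICE_RE.finditer(line):
                yield m.group(2), m.group(1), m.group(3)

    # e.g. ("systemd-random-seed", "SERVICE_START", "success")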
Mar 17 20:41:44.370917 kernel: audit: type=1334 audit(1742244104.354:146): prog-id=24 op=LOAD Mar 17 20:41:44.371035 kernel: audit: type=1334 audit(1742244104.366:147): prog-id=25 op=LOAD Mar 17 20:41:44.371078 kernel: audit: type=1334 audit(1742244104.366:148): prog-id=7 op=UNLOAD Mar 17 20:41:44.371117 kernel: audit: type=1334 audit(1742244104.366:149): prog-id=8 op=UNLOAD Mar 17 20:41:44.366000 audit: BPF prog-id=25 op=LOAD Mar 17 20:41:44.366000 audit: BPF prog-id=7 op=UNLOAD Mar 17 20:41:44.366000 audit: BPF prog-id=8 op=UNLOAD Mar 17 20:41:44.415635 systemd-udevd[963]: Using default interface naming scheme 'v252'. Mar 17 20:41:44.466427 systemd[1]: Started systemd-udevd.service. Mar 17 20:41:44.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.489351 kernel: audit: type=1130 audit(1742244104.467:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.500486 systemd[1]: Starting systemd-networkd.service... Mar 17 20:41:44.498000 audit: BPF prog-id=26 op=LOAD Mar 17 20:41:44.508315 kernel: audit: type=1334 audit(1742244104.498:151): prog-id=26 op=LOAD Mar 17 20:41:44.514000 audit: BPF prog-id=27 op=LOAD Mar 17 20:41:44.520324 kernel: audit: type=1334 audit(1742244104.514:152): prog-id=27 op=LOAD Mar 17 20:41:44.520466 systemd[1]: Starting systemd-userdbd.service... Mar 17 20:41:44.514000 audit: BPF prog-id=28 op=LOAD Mar 17 20:41:44.514000 audit: BPF prog-id=29 op=LOAD Mar 17 20:41:44.525465 kernel: audit: type=1334 audit(1742244104.514:153): prog-id=28 op=LOAD Mar 17 20:41:44.525508 kernel: audit: type=1334 audit(1742244104.514:154): prog-id=29 op=LOAD Mar 17 20:41:44.549134 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 20:41:44.575252 systemd[1]: Started systemd-userdbd.service. Mar 17 20:41:44.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.598807 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
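The paired "BPF prog-id=N op=LOAD" / "op=UNLOAD" audit records above show systemd swapping BPF programs as services (re)start. A sketch that replays them to see which program IDs remain loaded at any point; log_lines is an assumed input holding the raw records:

    import re

    def live_bpf_programs(log_lines):
        """Replay LOAD/UNLOAD audit records; return prog-ids still loaded."""
        loaded = set()
        for line in log_lines:
            for prog_id, op in re.findall(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", line):
                loaded.add(prog_id) if op == "LOAD" else loaded.discard(prog_id)
        return loaded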
Mar 17 20:41:44.698324 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 20:41:44.715330 kernel: ACPI: button: Power Button [PWRF] Mar 17 20:41:44.701000 audit[966]: AVC avc: denied { confidentiality } for pid=966 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 20:41:44.701000 audit[966]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c0087e1060 a1=338ac a2=7faf11fa1bc5 a3=5 items=110 ppid=963 pid=966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:41:44.701000 audit: CWD cwd="/" Mar 17 20:41:44.701000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=1 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=2 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=3 name=(null) inode=13662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=4 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=5 name=(null) inode=13663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=6 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=7 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=8 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=9 name=(null) inode=13665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=10 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=11 name=(null) inode=13666 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=12 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=13 name=(null) inode=13667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=14 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=15 name=(null) inode=13668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=16 name=(null) inode=13664 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=17 name=(null) inode=13669 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=18 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=19 name=(null) inode=13670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=20 name=(null) inode=13670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=21 name=(null) inode=13671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=22 name=(null) inode=13670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=23 name=(null) inode=13672 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=24 name=(null) inode=13670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=25 name=(null) inode=13673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=26 name=(null) inode=13670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=27 name=(null) inode=13674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=28 name=(null) inode=13670 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
20:41:44.701000 audit: PATH item=29 name=(null) inode=13675 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=30 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=31 name=(null) inode=13676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=32 name=(null) inode=13676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=33 name=(null) inode=13677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=34 name=(null) inode=13676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=35 name=(null) inode=13678 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=36 name=(null) inode=13676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=37 name=(null) inode=13679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=38 name=(null) inode=13676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=39 name=(null) inode=13680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=40 name=(null) inode=13676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=41 name=(null) inode=13681 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=42 name=(null) inode=13661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=43 name=(null) inode=13682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=44 name=(null) inode=13682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=45 name=(null) inode=13683 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=46 name=(null) inode=13682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=47 name=(null) inode=13684 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=48 name=(null) inode=13682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=49 name=(null) inode=13685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=50 name=(null) inode=13682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=51 name=(null) inode=13686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=52 name=(null) inode=13682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=53 name=(null) inode=13687 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=55 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=56 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=57 name=(null) inode=13689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=58 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=59 name=(null) inode=13690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=60 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=61 name=(null) inode=13691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=62 name=(null) inode=13691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=63 name=(null) inode=13692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=64 name=(null) inode=13691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=65 name=(null) inode=13693 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=66 name=(null) inode=13691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=67 name=(null) inode=13694 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=68 name=(null) inode=13691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=69 name=(null) inode=13695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=70 name=(null) inode=13691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=71 name=(null) inode=13696 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=72 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=73 name=(null) inode=13697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=74 name=(null) inode=13697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=75 name=(null) inode=13698 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=76 name=(null) inode=13697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=77 name=(null) inode=13699 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
20:41:44.701000 audit: PATH item=78 name=(null) inode=13697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=79 name=(null) inode=13700 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=80 name=(null) inode=13697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=81 name=(null) inode=13701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=82 name=(null) inode=13697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=83 name=(null) inode=13702 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=84 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=85 name=(null) inode=13703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=86 name=(null) inode=13703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=87 name=(null) inode=13704 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=88 name=(null) inode=13703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=89 name=(null) inode=13705 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=90 name=(null) inode=13703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=91 name=(null) inode=13706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=92 name=(null) inode=13703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=93 name=(null) inode=13707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=94 name=(null) inode=13703 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=95 name=(null) inode=13708 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=96 name=(null) inode=13688 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=97 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=98 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=99 name=(null) inode=13710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=100 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=101 name=(null) inode=13711 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=102 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=103 name=(null) inode=13712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=104 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=105 name=(null) inode=13713 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=106 name=(null) inode=13709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=107 name=(null) inode=13714 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PATH item=109 name=(null) inode=13715 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:41:44.701000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 20:41:44.782304 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 
0x700, revision 0 Mar 17 20:41:44.792310 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 20:41:44.814310 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 20:41:44.832765 systemd[1]: Finished systemd-udev-settle.service. Mar 17 20:41:44.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.836730 systemd[1]: Starting lvm2-activation-early.service... Mar 17 20:41:44.856536 systemd-networkd[983]: lo: Link UP Mar 17 20:41:44.856562 systemd-networkd[983]: lo: Gained carrier Mar 17 20:41:44.857871 systemd-networkd[983]: Enumeration completed Mar 17 20:41:44.858080 systemd-networkd[983]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 20:41:44.858137 systemd[1]: Started systemd-networkd.service. Mar 17 20:41:44.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.861816 systemd-networkd[983]: eth0: Link UP Mar 17 20:41:44.861826 systemd-networkd[983]: eth0: Gained carrier Mar 17 20:41:44.872864 lvm[997]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 20:41:44.881391 systemd-networkd[983]: eth0: DHCPv4 address 172.24.4.253/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 17 20:41:44.903502 systemd[1]: Finished lvm2-activation-early.service. Mar 17 20:41:44.904912 systemd[1]: Reached target cryptsetup.target. Mar 17 20:41:44.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.908258 systemd[1]: Starting lvm2-activation.service... Mar 17 20:41:44.912178 lvm[998]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 20:41:44.939987 systemd[1]: Finished lvm2-activation.service. Mar 17 20:41:44.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:44.941430 systemd[1]: Reached target local-fs-pre.target. Mar 17 20:41:44.942634 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 20:41:44.942697 systemd[1]: Reached target local-fs.target. Mar 17 20:41:44.943876 systemd[1]: Reached target machines.target. Mar 17 20:41:44.947522 systemd[1]: Starting ldconfig.service... Mar 17 20:41:44.950450 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:41:44.950541 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:41:44.952671 systemd[1]: Starting systemd-boot-update.service... Mar 17 20:41:44.956775 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 20:41:44.961902 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 20:41:44.967686 systemd[1]: Starting systemd-sysext.service... 
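The long udev-worker audit event ending at the PROCTITLE record above is one syscall event: a SYSCALL record announcing items=110 followed by PATH items 0 through 109. In raw auditd output the records of one event share an audit(TIMESTAMP:SERIAL) tag; in this journal rendering they share the microsecond timestamp 20:41:44.701000, so a sketch can group on that and verify the item count:

    import re
    from collections import Counter

    def path_items_per_event(lines):
        """Count PATH items per microsecond timestamp (one audit event each)."""
        counts = Counter()
        for line in lines:
            for stamp in re.findall(r"(\d\d:\d\d:\d\d\.\d{6}) audit: PATH item=\d+", line):
                counts[stamp] += 1
        return counts   # expected here: {"20:41:44.701000": 110}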
Mar 17 20:41:45.001772 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 20:41:45.004149 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1000 (bootctl) Mar 17 20:41:45.005834 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 20:41:45.031697 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 20:41:45.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.060915 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 20:41:45.061320 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 20:41:45.119364 kernel: loop0: detected capacity change from 0 to 218376 Mar 17 20:41:45.601531 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 20:41:45.603777 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 20:41:45.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.641359 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 20:41:45.675351 kernel: loop1: detected capacity change from 0 to 218376 Mar 17 20:41:45.715272 systemd-fsck[1010]: fsck.fat 4.2 (2021-01-31) Mar 17 20:41:45.715272 systemd-fsck[1010]: /dev/vda1: 789 files, 119299/258078 clusters Mar 17 20:41:45.719891 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 20:41:45.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.726109 systemd[1]: Mounting boot.mount... Mar 17 20:41:45.749135 (sd-sysext)[1015]: Using extensions 'kubernetes'. Mar 17 20:41:45.751828 (sd-sysext)[1015]: Merged extensions into '/usr'. Mar 17 20:41:45.780369 systemd[1]: Mounted boot.mount. Mar 17 20:41:45.790029 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:45.794208 systemd[1]: Mounting usr-share-oem.mount... Mar 17 20:41:45.796211 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:41:45.802296 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:41:45.805655 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:41:45.810549 systemd[1]: Starting modprobe@loop.service... Mar 17 20:41:45.812067 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:41:45.812219 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:41:45.812438 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:45.816494 systemd[1]: Mounted usr-share-oem.mount. Mar 17 20:41:45.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:41:45.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.817858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:41:45.818057 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:41:45.819005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:41:45.819154 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:41:45.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.820209 systemd[1]: Finished systemd-sysext.service. Mar 17 20:41:45.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.821411 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:41:45.821565 systemd[1]: Finished modprobe@loop.service. Mar 17 20:41:45.825495 systemd[1]: Starting ensure-sysext.service... Mar 17 20:41:45.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:45.828404 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:41:45.828502 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:41:45.830153 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 20:41:45.831674 systemd[1]: Finished systemd-boot-update.service. Mar 17 20:41:45.836469 systemd[1]: Reloading. Mar 17 20:41:45.871395 systemd-tmpfiles[1023]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 20:41:45.889339 systemd-tmpfiles[1023]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 20:41:45.905782 systemd-tmpfiles[1023]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
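The systemd-tmpfiles "Duplicate line" warnings above reflect its standard conflict rule: the first tmpfiles.d entry to claim a path wins and later claims are ignored. A simplified model of that check (the tuple layout is an assumption for illustration):

    def find_duplicate_paths(entries):
        """entries: iterable of (source_file, lineno, path) tuples."""
        first_claim, duplicates = {}, []
        for source, lineno, path in entries:
            if path in first_claim:
                duplicates.append((source, lineno, path, first_claim[path]))
            else:
                first_claim[path] = source
        return duplicates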
Mar 17 20:41:45.929756 /usr/lib/systemd/system-generators/torcx-generator[1042]: time="2025-03-17T20:41:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 20:41:45.929789 /usr/lib/systemd/system-generators/torcx-generator[1042]: time="2025-03-17T20:41:45Z" level=info msg="torcx already run" Mar 17 20:41:46.083422 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 20:41:46.083878 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 20:41:46.138807 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:41:46.231000 audit: BPF prog-id=30 op=LOAD Mar 17 20:41:46.231000 audit: BPF prog-id=27 op=UNLOAD Mar 17 20:41:46.232000 audit: BPF prog-id=31 op=LOAD Mar 17 20:41:46.232000 audit: BPF prog-id=32 op=LOAD Mar 17 20:41:46.232000 audit: BPF prog-id=28 op=UNLOAD Mar 17 20:41:46.232000 audit: BPF prog-id=29 op=UNLOAD Mar 17 20:41:46.233000 audit: BPF prog-id=33 op=LOAD Mar 17 20:41:46.234000 audit: BPF prog-id=34 op=LOAD Mar 17 20:41:46.234000 audit: BPF prog-id=24 op=UNLOAD Mar 17 20:41:46.234000 audit: BPF prog-id=25 op=UNLOAD Mar 17 20:41:46.235000 audit: BPF prog-id=35 op=LOAD Mar 17 20:41:46.235000 audit: BPF prog-id=21 op=UNLOAD Mar 17 20:41:46.235000 audit: BPF prog-id=36 op=LOAD Mar 17 20:41:46.235000 audit: BPF prog-id=37 op=LOAD Mar 17 20:41:46.235000 audit: BPF prog-id=22 op=UNLOAD Mar 17 20:41:46.235000 audit: BPF prog-id=23 op=UNLOAD Mar 17 20:41:46.236000 audit: BPF prog-id=38 op=LOAD Mar 17 20:41:46.236000 audit: BPF prog-id=26 op=UNLOAD Mar 17 20:41:46.249331 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 20:41:46.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.255160 systemd[1]: Starting audit-rules.service... Mar 17 20:41:46.257789 systemd[1]: Starting clean-ca-certificates.service... Mar 17 20:41:46.260919 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 20:41:46.263000 audit: BPF prog-id=39 op=LOAD Mar 17 20:41:46.265843 systemd[1]: Starting systemd-resolved.service... Mar 17 20:41:46.269000 audit: BPF prog-id=40 op=LOAD Mar 17 20:41:46.271255 systemd[1]: Starting systemd-timesyncd.service... Mar 17 20:41:46.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.274797 systemd[1]: Starting systemd-update-utmp.service... Mar 17 20:41:46.275960 systemd[1]: Finished clean-ca-certificates.service. Mar 17 20:41:46.281740 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
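The locksmithd.service warnings above point at the cgroup v1 to v2 directive migration (CPUShares= to CPUWeight=, MemoryLimit= to MemoryMax=). A minimal sketch of such a rewrite; the scaling of 1024 shares to weight 100 follows systemd's documented defaults (CPUShares default 1024, CPUWeight default 100) but is an assumption here, not something this log states:

    def migrate_directive(line):
        """Rewrite one unit-file line from cgroup v1 to v2 directives."""
        key, sep, value = line.strip().partition("=")
        if key == "CPUShares" and sep:
            weight = max(1, min(10000, int(value) * 100 // 1024))
            return f"CPUWeight={weight}"
        if key == "MemoryLimit" and sep:
            return f"MemoryMax={value}"
        return line

    # migrate_directive("CPUShares=1024") -> "CPUWeight=100"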
Mar 17 20:41:46.286159 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:46.286389 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:41:46.287813 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:41:46.289739 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:41:46.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.293584 systemd[1]: Starting modprobe@loop.service... Mar 17 20:41:46.294387 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:41:46.294531 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:41:46.294683 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:41:46.294792 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:46.296009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:41:46.296175 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:41:46.299524 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:46.299767 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:41:46.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.303246 systemd[1]: Starting modprobe@dm_mod.service... 
Mar 17 20:41:46.305697 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:41:46.305835 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:41:46.305965 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:41:46.306064 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:46.307019 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:41:46.307148 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:41:46.308098 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:41:46.308210 systemd[1]: Finished modprobe@loop.service. Mar 17 20:41:46.309053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:41:46.309159 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:41:46.310133 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:41:46.310250 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:41:46.315695 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:46.315928 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:41:46.318272 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:41:46.319000 audit[1096]: SYSTEM_BOOT pid=1096 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.321685 systemd[1]: Starting modprobe@drm.service... Mar 17 20:41:46.323470 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:41:46.326167 systemd[1]: Starting modprobe@loop.service... Mar 17 20:41:46.326820 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:41:46.326873 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:41:46.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:41:46.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.334853 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 20:41:46.335516 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:41:46.335566 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:41:46.336336 systemd[1]: Finished ensure-sysext.service. Mar 17 20:41:46.337127 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 20:41:46.337893 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:41:46.338018 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:41:46.338755 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 20:41:46.338874 systemd[1]: Finished modprobe@drm.service. Mar 17 20:41:46.339652 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:41:46.339770 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:41:46.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.345946 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:41:46.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.348026 systemd[1]: Finished systemd-update-utmp.service. Mar 17 20:41:46.349899 ldconfig[999]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 20:41:46.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.356207 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:41:46.356391 systemd[1]: Finished modprobe@loop.service. Mar 17 20:41:46.357007 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:41:46.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:41:46.368212 systemd[1]: Finished ldconfig.service. Mar 17 20:41:46.370230 systemd[1]: Starting systemd-update-done.service... Mar 17 20:41:46.377525 systemd[1]: Finished systemd-update-done.service. Mar 17 20:41:46.379521 systemd-networkd[983]: eth0: Gained IPv6LL Mar 17 20:41:46.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.386155 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 20:41:46.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:41:46.387000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 20:41:46.387000 audit[1119]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa8767da0 a2=420 a3=0 items=0 ppid=1090 pid=1119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:41:46.387000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 20:41:46.388964 augenrules[1119]: No rules Mar 17 20:41:46.389602 systemd[1]: Finished audit-rules.service. Mar 17 20:41:46.397943 systemd[1]: Started systemd-timesyncd.service. Mar 17 20:41:46.398690 systemd[1]: Reached target time-set.target. Mar 17 20:41:46.415446 systemd-resolved[1093]: Positive Trust Anchors: Mar 17 20:41:46.415804 systemd-resolved[1093]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 20:41:46.415898 systemd-resolved[1093]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 20:41:46.423914 systemd-resolved[1093]: Using system hostname 'ci-3510-3-7-0-2f3ee5d9b1.novalocal'. Mar 17 20:41:46.425675 systemd[1]: Started systemd-resolved.service. Mar 17 20:41:46.426321 systemd[1]: Reached target network.target. Mar 17 20:41:46.426781 systemd[1]: Reached target network-online.target. Mar 17 20:41:46.427230 systemd[1]: Reached target nss-lookup.target. Mar 17 20:41:46.427692 systemd[1]: Reached target sysinit.target. Mar 17 20:41:46.428224 systemd[1]: Started motdgen.path. Mar 17 20:41:46.428717 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 20:41:46.429460 systemd[1]: Started logrotate.timer. Mar 17 20:41:46.429975 systemd[1]: Started mdadm.timer. Mar 17 20:41:46.430422 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 20:41:46.430891 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 20:41:46.430924 systemd[1]: Reached target paths.target. Mar 17 20:41:46.431406 systemd[1]: Reached target timers.target. 
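The positive trust anchor systemd-resolved prints above is the DNSSEC root DS record: key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256). A quick structural check that the digest is a well-formed SHA-256 value (64 hex characters):

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = ds.split()
    assert (rr_type, digest_type) == ("DS", "2") and len(digest) == 64
    int(digest, 16)   # raises ValueError if the digest is not valid hex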
Mar 17 20:41:46.432181 systemd[1]: Listening on dbus.socket.
Mar 17 20:41:46.433911 systemd[1]: Starting docker.socket...
Mar 17 20:41:46.437950 systemd[1]: Listening on sshd.socket.
Mar 17 20:41:46.438584 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 20:41:46.439072 systemd[1]: Listening on docker.socket.
Mar 17 20:41:46.439628 systemd[1]: Reached target sockets.target.
Mar 17 20:41:46.440092 systemd[1]: Reached target basic.target.
Mar 17 20:41:46.440628 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 20:41:46.440661 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 20:41:46.441838 systemd[1]: Starting containerd.service...
Mar 17 20:41:46.444594 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Mar 17 20:41:46.446355 systemd[1]: Starting dbus.service...
Mar 17 20:41:46.448645 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 20:41:46.451035 systemd[1]: Starting extend-filesystems.service...
Mar 17 20:41:46.452318 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 20:41:46.456821 systemd[1]: Starting kubelet.service...
Mar 17 20:41:46.463641 jq[1133]: false
Mar 17 20:41:46.459108 systemd[1]: Starting motdgen.service...
Mar 17 20:41:46.461026 systemd[1]: Starting prepare-helm.service...
Mar 17 20:41:46.464464 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 20:41:46.467488 systemd[1]: Starting sshd-keygen.service...
Mar 17 20:41:46.475187 systemd[1]: Starting systemd-logind.service...
Mar 17 20:41:46.475859 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 20:41:46.475922 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 20:41:46.477627 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 20:41:46.480418 systemd[1]: Starting update-engine.service...
Mar 17 20:41:46.482950 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 20:41:46.486553 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 20:41:46.486765 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 20:41:46.494546 jq[1146]: true
Mar 17 20:41:46.509575 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 20:41:46.509795 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 20:41:46.515044 systemd[1]: Created slice system-sshd.slice.
Mar 17 20:41:46.519432 jq[1152]: true
Mar 17 20:41:47.957724 systemd-resolved[1093]: Clock change detected. Flushing caches.
Mar 17 20:41:47.957916 systemd-timesyncd[1095]: Contacted time server 147.135.15.159:123 (0.flatcar.pool.ntp.org).
Mar 17 20:41:47.957979 systemd-timesyncd[1095]: Initial clock synchronization to Mon 2025-03-17 20:41:47.957671 UTC.
Mar 17 20:41:47.962669 extend-filesystems[1134]: Found loop1
Mar 17 20:41:47.962669 extend-filesystems[1134]: Found vda
Mar 17 20:41:47.962669 extend-filesystems[1134]: Found vda1
Mar 17 20:41:47.962669 extend-filesystems[1134]: Found vda2
Mar 17 20:41:47.962669 extend-filesystems[1134]: Found vda3
Mar 17 20:41:47.962669 extend-filesystems[1134]: Found usr
Mar 17 20:41:47.962669 extend-filesystems[1134]: Found vda4
Mar 17 20:41:47.962669 extend-filesystems[1134]: Found vda6
Mar 17 20:41:47.993706 extend-filesystems[1134]: Found vda7
Mar 17 20:41:47.993706 extend-filesystems[1134]: Found vda9
Mar 17 20:41:47.993706 extend-filesystems[1134]: Checking size of /dev/vda9
Mar 17 20:41:48.008882 tar[1150]: linux-amd64/LICENSE
Mar 17 20:41:48.008882 tar[1150]: linux-amd64/helm
Mar 17 20:41:48.000261 systemd[1]: Started dbus.service.
Mar 17 20:41:48.000034 dbus-daemon[1130]: [system] SELinux support is enabled
Mar 17 20:41:48.002924 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 20:41:48.002964 systemd[1]: Reached target system-config.target.
Mar 17 20:41:48.003498 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 20:41:48.020822 extend-filesystems[1134]: Resized partition /dev/vda9
Mar 17 20:41:48.003516 systemd[1]: Reached target user-config.target.
Mar 17 20:41:48.026580 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 20:41:48.026794 systemd[1]: Finished motdgen.service.
Mar 17 20:41:48.038472 extend-filesystems[1182]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 20:41:48.056282 env[1151]: time="2025-03-17T20:41:48.056215583Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 20:41:48.074751 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Mar 17 20:41:48.079655 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Mar 17 20:41:48.140584 env[1151]: time="2025-03-17T20:41:48.116559067Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 20:41:48.141784 update_engine[1144]: I0317 20:41:48.140440 1144 main.cc:92] Flatcar Update Engine starting
Mar 17 20:41:48.144542 extend-filesystems[1182]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 20:41:48.144542 extend-filesystems[1182]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 20:41:48.144542 extend-filesystems[1182]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Mar 17 20:41:48.147882 extend-filesystems[1134]: Resized filesystem in /dev/vda9
Mar 17 20:41:48.144962 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 20:41:48.145142 systemd[1]: Finished extend-filesystems.service.
Mar 17 20:41:48.149463 systemd[1]: Started update-engine.service.
Mar 17 20:41:48.149684 update_engine[1144]: I0317 20:41:48.149483 1144 update_check_scheduler.cc:74] Next update check in 10m41s
Mar 17 20:41:48.153172 systemd[1]: Started locksmithd.service.
Mar 17 20:41:48.169597 bash[1187]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 20:41:48.170409 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 20:41:48.172388 env[1151]: time="2025-03-17T20:41:48.172310218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 20:41:48.174376 env[1151]: time="2025-03-17T20:41:48.174333733Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 20:41:48.174423 env[1151]: time="2025-03-17T20:41:48.174389608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 20:41:48.174721 env[1151]: time="2025-03-17T20:41:48.174692025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 20:41:48.174721 env[1151]: time="2025-03-17T20:41:48.174718385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 20:41:48.174794 env[1151]: time="2025-03-17T20:41:48.174752819Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 20:41:48.174794 env[1151]: time="2025-03-17T20:41:48.174767737Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 20:41:48.174906 env[1151]: time="2025-03-17T20:41:48.174880238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 20:41:48.175302 env[1151]: time="2025-03-17T20:41:48.175274528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 20:41:48.175476 env[1151]: time="2025-03-17T20:41:48.175428486Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 20:41:48.175515 env[1151]: time="2025-03-17T20:41:48.175473501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 20:41:48.175579 env[1151]: time="2025-03-17T20:41:48.175552288Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 20:41:48.175579 env[1151]: time="2025-03-17T20:41:48.175573418Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 20:41:48.178232 systemd-logind[1140]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 20:41:48.178261 systemd-logind[1140]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 20:41:48.180036 systemd-logind[1140]: New seat seat0.
Mar 17 20:41:48.188546 systemd[1]: Started systemd-logind.service.
Mar 17 20:41:48.192011 env[1151]: time="2025-03-17T20:41:48.191961464Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 20:41:48.192064 env[1151]: time="2025-03-17T20:41:48.192028510Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 20:41:48.192064 env[1151]: time="2025-03-17T20:41:48.192049960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 20:41:48.192174 env[1151]: time="2025-03-17T20:41:48.192151871Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 20:41:48.192215 env[1151]: time="2025-03-17T20:41:48.192196475Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 20:41:48.192243 env[1151]: time="2025-03-17T20:41:48.192216512Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 20:41:48.192243 env[1151]: time="2025-03-17T20:41:48.192233745Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 20:41:48.192292 env[1151]: time="2025-03-17T20:41:48.192250586Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 20:41:48.192322 env[1151]: time="2025-03-17T20:41:48.192300159Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 20:41:48.192322 env[1151]: time="2025-03-17T20:41:48.192319085Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 20:41:48.192376 env[1151]: time="2025-03-17T20:41:48.192335185Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 20:41:48.192404 env[1151]: time="2025-03-17T20:41:48.192377394Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 20:41:48.192590 env[1151]: time="2025-03-17T20:41:48.192564525Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.192707363Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193051719Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193102203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193118965Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193185319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193203744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193219283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193253647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193269948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193284916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193299002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193330812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193352022Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193542930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.194962 env[1151]: time="2025-03-17T20:41:48.193582164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.196357 env[1151]: time="2025-03-17T20:41:48.193598584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.196357 env[1151]: time="2025-03-17T20:41:48.193612320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 20:41:48.196357 env[1151]: time="2025-03-17T20:41:48.193653597Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 20:41:48.196357 env[1151]: time="2025-03-17T20:41:48.193669808Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 20:41:48.196357 env[1151]: time="2025-03-17T20:41:48.193692140Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 20:41:48.196357 env[1151]: time="2025-03-17T20:41:48.193752102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 20:41:48.195545 systemd[1]: Started containerd.service.
Mar 17 20:41:48.196608 env[1151]: time="2025-03-17T20:41:48.194025495Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 20:41:48.196608 env[1151]: time="2025-03-17T20:41:48.194109863Z" level=info msg="Connect containerd service"
Mar 17 20:41:48.196608 env[1151]: time="2025-03-17T20:41:48.194177811Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 20:41:48.196608 env[1151]: time="2025-03-17T20:41:48.195117343Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 20:41:48.196608 env[1151]: time="2025-03-17T20:41:48.195365759Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 20:41:48.196608 env[1151]: time="2025-03-17T20:41:48.195405744Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 20:41:48.196608 env[1151]: time="2025-03-17T20:41:48.195455838Z" level=info msg="containerd successfully booted in 0.140106s"
Mar 17 20:41:48.199979 env[1151]: time="2025-03-17T20:41:48.196893764Z" level=info msg="Start subscribing containerd event"
Mar 17 20:41:48.199979 env[1151]: time="2025-03-17T20:41:48.196972382Z" level=info msg="Start recovering state"
Mar 17 20:41:48.199979 env[1151]: time="2025-03-17T20:41:48.197059595Z" level=info msg="Start event monitor"
Mar 17 20:41:48.199979 env[1151]: time="2025-03-17T20:41:48.197075145Z" level=info msg="Start snapshots syncer"
Mar 17 20:41:48.199979 env[1151]: time="2025-03-17T20:41:48.197087959Z" level=info msg="Start cni network conf syncer for default"
Mar 17 20:41:48.199979 env[1151]: time="2025-03-17T20:41:48.197097667Z" level=info msg="Start streaming server"
Mar 17 20:41:48.680071 tar[1150]: linux-amd64/README.md
Mar 17 20:41:48.684776 systemd[1]: Finished prepare-helm.service.
Mar 17 20:41:49.166268 locksmithd[1192]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 20:41:49.412694 sshd_keygen[1161]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 20:41:49.447412 systemd[1]: Finished sshd-keygen.service.
Mar 17 20:41:49.449728 systemd[1]: Starting issuegen.service...
Mar 17 20:41:49.451476 systemd[1]: Started sshd@0-172.24.4.253:22-172.24.4.1:45588.service.
Mar 17 20:41:49.460115 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 20:41:49.460283 systemd[1]: Finished issuegen.service.
Mar 17 20:41:49.462291 systemd[1]: Starting systemd-user-sessions.service...
Mar 17 20:41:49.471177 systemd[1]: Finished systemd-user-sessions.service.
Mar 17 20:41:49.473358 systemd[1]: Started getty@tty1.service.
Mar 17 20:41:49.475339 systemd[1]: Started serial-getty@ttyS0.service.
Mar 17 20:41:49.476106 systemd[1]: Reached target getty.target.
Mar 17 20:41:50.522766 sshd[1208]: Accepted publickey for core from 172.24.4.1 port 45588 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:41:50.530066 sshd[1208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:41:50.560932 systemd[1]: Created slice user-500.slice.
Mar 17 20:41:50.565242 systemd[1]: Starting user-runtime-dir@500.service...
Mar 17 20:41:50.571707 systemd-logind[1140]: New session 1 of user core.
Mar 17 20:41:50.593311 systemd[1]: Finished user-runtime-dir@500.service.
Mar 17 20:41:50.596981 systemd[1]: Starting user@500.service...
Mar 17 20:41:50.607002 (systemd)[1216]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:41:50.726835 systemd[1216]: Queued start job for default target default.target.
Mar 17 20:41:50.727418 systemd[1216]: Reached target paths.target.
Mar 17 20:41:50.727440 systemd[1216]: Reached target sockets.target.
Mar 17 20:41:50.727456 systemd[1216]: Reached target timers.target.
Mar 17 20:41:50.727470 systemd[1216]: Reached target basic.target.
Mar 17 20:41:50.727515 systemd[1216]: Reached target default.target.
Mar 17 20:41:50.727543 systemd[1216]: Startup finished in 104ms.
Mar 17 20:41:50.728119 systemd[1]: Started user@500.service.
Mar 17 20:41:50.731355 systemd[1]: Started session-1.scope.
Mar 17 20:41:50.957918 systemd[1]: Started kubelet.service.
Mar 17 20:41:51.212401 systemd[1]: Started sshd@1-172.24.4.253:22-172.24.4.1:45594.service.
Mar 17 20:41:52.402354 kubelet[1225]: E0317 20:41:52.401484 1225 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 20:41:52.406682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 20:41:52.407085 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 20:41:52.407868 systemd[1]: kubelet.service: Consumed 2.252s CPU time.
Mar 17 20:41:53.346842 sshd[1230]: Accepted publickey for core from 172.24.4.1 port 45594 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:41:53.349014 sshd[1230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:41:53.360312 systemd-logind[1140]: New session 2 of user core.
Mar 17 20:41:53.361253 systemd[1]: Started session-2.scope.
Mar 17 20:41:53.992205 sshd[1230]: pam_unix(sshd:session): session closed for user core
Mar 17 20:41:54.000831 systemd[1]: Started sshd@2-172.24.4.253:22-172.24.4.1:40636.service.
Mar 17 20:41:54.004776 systemd[1]: sshd@1-172.24.4.253:22-172.24.4.1:45594.service: Deactivated successfully.
Mar 17 20:41:54.006978 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 20:41:54.008921 systemd-logind[1140]: Session 2 logged out. Waiting for processes to exit.
Mar 17 20:41:54.011717 systemd-logind[1140]: Removed session 2.
Mar 17 20:41:55.039246 coreos-metadata[1129]: Mar 17 20:41:55.039 WARN failed to locate config-drive, using the metadata service API instead
Mar 17 20:41:55.198287 coreos-metadata[1129]: Mar 17 20:41:55.198 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 17 20:41:55.354778 sshd[1239]: Accepted publickey for core from 172.24.4.1 port 40636 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:41:55.356323 sshd[1239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:41:55.365988 systemd-logind[1140]: New session 3 of user core.
Mar 17 20:41:55.367308 systemd[1]: Started session-3.scope.
Mar 17 20:41:55.498425 coreos-metadata[1129]: Mar 17 20:41:55.498 INFO Fetch successful
Mar 17 20:41:55.498425 coreos-metadata[1129]: Mar 17 20:41:55.498 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 17 20:41:55.510384 coreos-metadata[1129]: Mar 17 20:41:55.510 INFO Fetch successful
Mar 17 20:41:55.516041 unknown[1129]: wrote ssh authorized keys file for user: core
Mar 17 20:41:55.549804 update-ssh-keys[1245]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 20:41:55.551700 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Mar 17 20:41:55.552718 systemd[1]: Reached target multi-user.target.
Mar 17 20:41:55.555736 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Mar 17 20:41:55.575450 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar 17 20:41:55.575829 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Mar 17 20:41:55.576245 systemd[1]: Startup finished in 1.018s (kernel) + 9.674s (initrd) + 14.792s (userspace) = 25.484s.
Mar 17 20:41:55.997720 sshd[1239]: pam_unix(sshd:session): session closed for user core
Mar 17 20:41:56.002816 systemd[1]: sshd@2-172.24.4.253:22-172.24.4.1:40636.service: Deactivated successfully.
Mar 17 20:41:56.004436 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 20:41:56.005883 systemd-logind[1140]: Session 3 logged out. Waiting for processes to exit.
Mar 17 20:41:56.007790 systemd-logind[1140]: Removed session 3.
Mar 17 20:42:02.534394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 20:42:02.534861 systemd[1]: Stopped kubelet.service.
Mar 17 20:42:02.535016 systemd[1]: kubelet.service: Consumed 2.252s CPU time.
Mar 17 20:42:02.537671 systemd[1]: Starting kubelet.service...
Mar 17 20:42:02.833419 systemd[1]: Started kubelet.service.
Mar 17 20:42:02.997310 kubelet[1253]: E0317 20:42:02.997234 1253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 20:42:03.004994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 20:42:03.005281 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 20:42:06.008740 systemd[1]: Started sshd@3-172.24.4.253:22-172.24.4.1:34076.service.
Mar 17 20:42:07.182804 sshd[1260]: Accepted publickey for core from 172.24.4.1 port 34076 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:42:07.185973 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:42:07.198503 systemd-logind[1140]: New session 4 of user core.
Mar 17 20:42:07.199311 systemd[1]: Started session-4.scope.
Mar 17 20:42:07.974354 sshd[1260]: pam_unix(sshd:session): session closed for user core
Mar 17 20:42:07.980236 systemd[1]: Started sshd@4-172.24.4.253:22-172.24.4.1:34090.service.
Mar 17 20:42:07.986472 systemd[1]: sshd@3-172.24.4.253:22-172.24.4.1:34076.service: Deactivated successfully.
Mar 17 20:42:07.988100 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 20:42:07.990997 systemd-logind[1140]: Session 4 logged out. Waiting for processes to exit.
Mar 17 20:42:07.993121 systemd-logind[1140]: Removed session 4.
Mar 17 20:42:09.425586 sshd[1265]: Accepted publickey for core from 172.24.4.1 port 34090 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:42:09.428223 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:42:09.438787 systemd-logind[1140]: New session 5 of user core.
Mar 17 20:42:09.439839 systemd[1]: Started session-5.scope.
Mar 17 20:42:10.163716 sshd[1265]: pam_unix(sshd:session): session closed for user core
Mar 17 20:42:10.170852 systemd[1]: Started sshd@5-172.24.4.253:22-172.24.4.1:34106.service.
Mar 17 20:42:10.174022 systemd[1]: sshd@4-172.24.4.253:22-172.24.4.1:34090.service: Deactivated successfully.
Mar 17 20:42:10.175680 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 20:42:10.177762 systemd-logind[1140]: Session 5 logged out. Waiting for processes to exit.
Mar 17 20:42:10.179599 systemd-logind[1140]: Removed session 5.
Mar 17 20:42:11.585153 sshd[1271]: Accepted publickey for core from 172.24.4.1 port 34106 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:42:11.588431 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:42:11.599397 systemd[1]: Started session-6.scope.
Mar 17 20:42:11.601102 systemd-logind[1140]: New session 6 of user core.
Mar 17 20:42:12.422737 sshd[1271]: pam_unix(sshd:session): session closed for user core
Mar 17 20:42:12.430191 systemd[1]: Started sshd@6-172.24.4.253:22-172.24.4.1:34116.service.
Mar 17 20:42:12.431817 systemd[1]: sshd@5-172.24.4.253:22-172.24.4.1:34106.service: Deactivated successfully.
Mar 17 20:42:12.433338 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 20:42:12.437336 systemd-logind[1140]: Session 6 logged out. Waiting for processes to exit.
Mar 17 20:42:12.439986 systemd-logind[1140]: Removed session 6.
Mar 17 20:42:13.034353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 20:42:13.034824 systemd[1]: Stopped kubelet.service.
Mar 17 20:42:13.038044 systemd[1]: Starting kubelet.service...
Mar 17 20:42:13.203273 systemd[1]: Started kubelet.service.
Mar 17 20:42:13.270188 kubelet[1284]: E0317 20:42:13.269725 1284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 20:42:13.273172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 20:42:13.273428 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 20:42:13.964068 sshd[1277]: Accepted publickey for core from 172.24.4.1 port 34116 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:42:13.966563 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:42:13.976466 systemd[1]: Started session-7.scope.
Mar 17 20:42:13.977736 systemd-logind[1140]: New session 7 of user core.
Mar 17 20:42:14.458191 sudo[1291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 20:42:14.458761 sudo[1291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 20:42:14.507740 systemd[1]: Starting docker.service...
Mar 17 20:42:14.563236 env[1301]: time="2025-03-17T20:42:14.563157485Z" level=info msg="Starting up"
Mar 17 20:42:14.565940 env[1301]: time="2025-03-17T20:42:14.565902894Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 20:42:14.566178 env[1301]: time="2025-03-17T20:42:14.566146942Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 20:42:14.566367 env[1301]: time="2025-03-17T20:42:14.566328422Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 20:42:14.566504 env[1301]: time="2025-03-17T20:42:14.566475338Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 20:42:14.570211 env[1301]: time="2025-03-17T20:42:14.570168094Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 20:42:14.570380 env[1301]: time="2025-03-17T20:42:14.570343453Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 20:42:14.570549 env[1301]: time="2025-03-17T20:42:14.570509955Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 20:42:14.570733 env[1301]: time="2025-03-17T20:42:14.570700853Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 20:42:14.592246 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1878309665-merged.mount: Deactivated successfully.
Mar 17 20:42:14.657895 env[1301]: time="2025-03-17T20:42:14.657832999Z" level=info msg="Loading containers: start."
Mar 17 20:42:14.873657 kernel: Initializing XFRM netlink socket
Mar 17 20:42:14.929472 env[1301]: time="2025-03-17T20:42:14.929438615Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 17 20:42:15.032305 systemd-networkd[983]: docker0: Link UP
Mar 17 20:42:15.057102 env[1301]: time="2025-03-17T20:42:15.057047676Z" level=info msg="Loading containers: done."
Mar 17 20:42:15.082666 env[1301]: time="2025-03-17T20:42:15.082512946Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 20:42:15.083024 env[1301]: time="2025-03-17T20:42:15.082849457Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 17 20:42:15.083124 env[1301]: time="2025-03-17T20:42:15.083022502Z" level=info msg="Daemon has completed initialization"
Mar 17 20:42:15.118899 systemd[1]: Started docker.service.
Mar 17 20:42:15.145141 env[1301]: time="2025-03-17T20:42:15.144868444Z" level=info msg="API listen on /run/docker.sock"
Mar 17 20:42:16.987252 env[1151]: time="2025-03-17T20:42:16.987136968Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\""
Mar 17 20:42:17.831285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963485365.mount: Deactivated successfully.
Mar 17 20:42:20.252821 env[1151]: time="2025-03-17T20:42:20.252739485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:20.258656 env[1151]: time="2025-03-17T20:42:20.258542488Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:20.266181 env[1151]: time="2025-03-17T20:42:20.266118247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:20.270786 env[1151]: time="2025-03-17T20:42:20.270716120Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:20.272422 env[1151]: time="2025-03-17T20:42:20.272364872Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\""
Mar 17 20:42:20.279492 env[1151]: time="2025-03-17T20:42:20.279398524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\""
Mar 17 20:42:22.749326 env[1151]: time="2025-03-17T20:42:22.749230268Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:22.752899 env[1151]: time="2025-03-17T20:42:22.752831252Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:22.756836 env[1151]: time="2025-03-17T20:42:22.756783755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:22.760280 env[1151]: time="2025-03-17T20:42:22.760204993Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:22.762526 env[1151]: time="2025-03-17T20:42:22.762468972Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\""
Mar 17 20:42:22.763858 env[1151]: time="2025-03-17T20:42:22.763800908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\""
Mar 17 20:42:23.284481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 20:42:23.285026 systemd[1]: Stopped kubelet.service.
Mar 17 20:42:23.288525 systemd[1]: Starting kubelet.service...
Mar 17 20:42:23.440808 systemd[1]: Started kubelet.service.
Mar 17 20:42:23.591140 kubelet[1429]: E0317 20:42:23.590966 1429 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 20:42:23.594669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 20:42:23.595085 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 20:42:24.859449 env[1151]: time="2025-03-17T20:42:24.859337488Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:24.865486 env[1151]: time="2025-03-17T20:42:24.865416487Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:24.868930 env[1151]: time="2025-03-17T20:42:24.868848880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:24.873375 env[1151]: time="2025-03-17T20:42:24.873310212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:24.875372 env[1151]: time="2025-03-17T20:42:24.875283898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\""
Mar 17 20:42:24.877006 env[1151]: time="2025-03-17T20:42:24.876796484Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 17 20:42:26.457620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458413751.mount: Deactivated successfully.
Mar 17 20:42:27.941784 env[1151]: time="2025-03-17T20:42:27.941693558Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:27.947351 env[1151]: time="2025-03-17T20:42:27.947296203Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:27.950606 env[1151]: time="2025-03-17T20:42:27.950527377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:27.954071 env[1151]: time="2025-03-17T20:42:27.953995855Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:27.956615 env[1151]: time="2025-03-17T20:42:27.956542464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\""
Mar 17 20:42:27.960085 env[1151]: time="2025-03-17T20:42:27.960014399Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Mar 17 20:42:28.592283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2416688816.mount: Deactivated successfully.
Mar 17 20:42:30.170029 env[1151]: time="2025-03-17T20:42:30.169918721Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:30.177320 env[1151]: time="2025-03-17T20:42:30.177244147Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:30.180832 env[1151]: time="2025-03-17T20:42:30.180780026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:30.186708 env[1151]: time="2025-03-17T20:42:30.186607246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:30.187516 env[1151]: time="2025-03-17T20:42:30.187381734Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Mar 17 20:42:30.192253 env[1151]: time="2025-03-17T20:42:30.192192479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 17 20:42:31.178313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652891564.mount: Deactivated successfully.
Mar 17 20:42:31.196840 env[1151]: time="2025-03-17T20:42:31.196720846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:31.202267 env[1151]: time="2025-03-17T20:42:31.202190617Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:31.206846 env[1151]: time="2025-03-17T20:42:31.206768775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:31.210200 env[1151]: time="2025-03-17T20:42:31.210128104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:31.212020 env[1151]: time="2025-03-17T20:42:31.211945349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 17 20:42:31.214590 env[1151]: time="2025-03-17T20:42:31.214511067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Mar 17 20:42:31.879753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483745370.mount: Deactivated successfully.
Mar 17 20:42:33.111705 update_engine[1144]: I0317 20:42:33.111037 1144 update_attempter.cc:509] Updating boot flags...
Mar 17 20:42:33.746459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 20:42:33.747048 systemd[1]: Stopped kubelet.service.
Mar 17 20:42:33.752688 systemd[1]: Starting kubelet.service...
Mar 17 20:42:34.317397 systemd[1]: Started kubelet.service.
Mar 17 20:42:34.484079 kubelet[1454]: E0317 20:42:34.484001 1454 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 20:42:34.485471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 20:42:34.485601 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 20:42:36.663522 env[1151]: time="2025-03-17T20:42:36.663284808Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:36.863883 env[1151]: time="2025-03-17T20:42:36.863759001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:36.874301 env[1151]: time="2025-03-17T20:42:36.871424769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:36.879878 env[1151]: time="2025-03-17T20:42:36.879774766Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 20:42:36.881525 env[1151]: time="2025-03-17T20:42:36.881411504Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Mar 17 20:42:41.796118 systemd[1]: Stopped kubelet.service.
Mar 17 20:42:41.804220 systemd[1]: Starting kubelet.service...
Mar 17 20:42:41.848165 systemd[1]: Reloading.
Mar 17 20:42:42.009913 /usr/lib/systemd/system-generators/torcx-generator[1504]: time="2025-03-17T20:42:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 20:42:42.019783 /usr/lib/systemd/system-generators/torcx-generator[1504]: time="2025-03-17T20:42:42Z" level=info msg="torcx already run"
Mar 17 20:42:42.185853 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 20:42:42.186147 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 20:42:42.209909 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 20:42:42.339312 systemd[1]: Started kubelet.service.
Mar 17 20:42:42.345885 systemd[1]: Stopping kubelet.service...
Mar 17 20:42:42.346922 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 20:42:42.347321 systemd[1]: Stopped kubelet.service.
Mar 17 20:42:42.351283 systemd[1]: Starting kubelet.service...
Mar 17 20:42:42.443290 systemd[1]: Started kubelet.service.
Mar 17 20:42:42.787401 kubelet[1558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 20:42:42.788111 kubelet[1558]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 17 20:42:42.789451 kubelet[1558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 20:42:42.790086 kubelet[1558]: I0317 20:42:42.789983 1558 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 20:42:43.718553 kubelet[1558]: I0317 20:42:43.718419 1558 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 17 20:42:43.722407 kubelet[1558]: I0317 20:42:43.720089 1558 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 20:42:43.722407 kubelet[1558]: I0317 20:42:43.720497 1558 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 17 20:42:43.793098 kubelet[1558]: E0317 20:42:43.793028 1558 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError"
Mar 17 20:42:43.801442 kubelet[1558]: I0317 20:42:43.801401 1558 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 20:42:43.845486 kubelet[1558]: E0317 20:42:43.845399 1558 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 20:42:43.845486 kubelet[1558]: I0317 20:42:43.845484 1558 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 20:42:43.852841 kubelet[1558]: I0317 20:42:43.852796 1558 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 20:42:43.853745 kubelet[1558]: I0317 20:42:43.853675 1558 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 20:42:43.854489 kubelet[1558]: I0317 20:42:43.853903 1558 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-0-2f3ee5d9b1.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 20:42:43.854971 kubelet[1558]: I0317 20:42:43.854938 1558 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 20:42:43.855136 kubelet[1558]: I0317 20:42:43.855115 1558 container_manager_linux.go:304] "Creating device plugin manager"
Mar 17 20:42:43.855601 kubelet[1558]: I0317 20:42:43.855571 1558 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 20:42:43.866354 kubelet[1558]: I0317 20:42:43.866324 1558 kubelet.go:446] "Attempting to sync node with API server"
Mar 17 20:42:43.866538 kubelet[1558]: I0317 20:42:43.866514 1558 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 20:42:43.866939 kubelet[1558]: I0317 20:42:43.866910 1558 kubelet.go:352] "Adding apiserver pod source"
Mar 17 20:42:43.867192 kubelet[1558]: I0317 20:42:43.867163 1558 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 20:42:43.890822 kubelet[1558]: W0317 20:42:43.890582 1558 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-0-2f3ee5d9b1.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused
Mar 17 20:42:43.891004 kubelet[1558]: E0317 20:42:43.890845 1558 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-0-2f3ee5d9b1.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError"
Mar 17 20:42:43.892997 kubelet[1558]: I0317 20:42:43.891587 1558 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 20:42:43.892997 kubelet[1558]: I0317 20:42:43.892712 1558 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 20:42:43.897253 kubelet[1558]: W0317 20:42:43.897216 1558 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 20:42:43.909432 kubelet[1558]: I0317 20:42:43.909396 1558 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 20:42:43.909723 kubelet[1558]: I0317 20:42:43.909696 1558 server.go:1287] "Started kubelet" Mar 17 20:42:43.912511 kubelet[1558]: W0317 20:42:43.912405 1558 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Mar 17 20:42:43.912693 kubelet[1558]: E0317 20:42:43.912528 1558 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:42:43.914389 kubelet[1558]: I0317 20:42:43.912823 1558 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 20:42:43.920598 kubelet[1558]: I0317 20:42:43.920414 1558 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 20:42:43.921313 kubelet[1558]: I0317 20:42:43.921277 1558 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 20:42:43.923691 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 20:42:43.923821 kubelet[1558]: I0317 20:42:43.922988 1558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 20:42:43.928249 kubelet[1558]: E0317 20:42:43.924938 1558 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.253:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.253:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-0-2f3ee5d9b1.novalocal.182db1d615d44ed6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-0-2f3ee5d9b1.novalocal,UID:ci-3510-3-7-0-2f3ee5d9b1.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-0-2f3ee5d9b1.novalocal,},FirstTimestamp:2025-03-17 20:42:43.90960303 +0000 UTC m=+1.458448288,LastTimestamp:2025-03-17 20:42:43.90960303 +0000 UTC m=+1.458448288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-0-2f3ee5d9b1.novalocal,}" Mar 17 20:42:43.931616 kubelet[1558]: E0317 20:42:43.931574 1558 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 20:42:43.933839 kubelet[1558]: I0317 20:42:43.933802 1558 server.go:490] "Adding debug handlers to kubelet server" Mar 17 20:42:43.934518 kubelet[1558]: I0317 20:42:43.934466 1558 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 20:42:43.935237 kubelet[1558]: E0317 20:42:43.935181 1558 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" Mar 17 20:42:43.936527 kubelet[1558]: I0317 20:42:43.936489 1558 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 20:42:43.941015 kubelet[1558]: I0317 20:42:43.940067 1558 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 20:42:43.941015 kubelet[1558]: I0317 20:42:43.940951 1558 reconciler.go:26] "Reconciler: start to sync state" Mar 17 20:42:43.946963 kubelet[1558]: E0317 20:42:43.946624 1558 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-0-2f3ee5d9b1.novalocal?timeout=10s\": dial tcp 172.24.4.253:6443: connect: connection refused" interval="200ms" Mar 17 20:42:43.947912 kubelet[1558]: W0317 20:42:43.947824 1558 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Mar 17 20:42:43.948043 kubelet[1558]: E0317 20:42:43.947909 1558 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:42:43.958452 kubelet[1558]: I0317 20:42:43.958413 1558 factory.go:221] Registration of the containerd container factory successfully Mar 17 20:42:43.958452 kubelet[1558]: I0317 20:42:43.958434 1558 factory.go:221] Registration of the systemd container factory successfully Mar 17 20:42:43.958767 kubelet[1558]: I0317 20:42:43.958567 1558 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 20:42:43.980034 kubelet[1558]: I0317 20:42:43.979918 1558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 20:42:43.983925 kubelet[1558]: I0317 20:42:43.983886 1558 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 20:42:43.984086 kubelet[1558]: I0317 20:42:43.984076 1558 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 20:42:43.984274 kubelet[1558]: I0317 20:42:43.984262 1558 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
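[Editorial note] The event timestamps above carry an "m=+1.458448288" suffix. That is Go's monotonic clock reading, which time.Time values keep alongside the wall clock and include when formatted; here it says the kubelet process had been running for about 1.46s when the event was recorded. A minimal standalone demonstration of the same behavior:

package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Now() // carries wall clock + monotonic reading
	time.Sleep(50 * time.Millisecond)
	fmt.Println(t)             // prints "... m=+0.0000xxxxx", like the event above
	fmt.Println(t.Round(0))    // Round(0) strips the monotonic reading
	fmt.Println(time.Since(t)) // computed from the monotonic clock: ~50ms
}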
Mar 17 20:42:43.984408 kubelet[1558]: I0317 20:42:43.984399 1558 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 20:42:43.984573 kubelet[1558]: E0317 20:42:43.984524 1558 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 20:42:43.988695 kubelet[1558]: W0317 20:42:43.988427 1558 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Mar 17 20:42:43.989933 kubelet[1558]: E0317 20:42:43.989910 1558 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:42:43.990439 kubelet[1558]: I0317 20:42:43.990425 1558 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 20:42:43.990515 kubelet[1558]: I0317 20:42:43.990504 1558 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 20:42:43.990586 kubelet[1558]: I0317 20:42:43.990577 1558 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:42:44.035783 kubelet[1558]: E0317 20:42:44.035722 1558 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" Mar 17 20:42:44.052424 kubelet[1558]: I0317 20:42:44.052403 1558 policy_none.go:49] "None policy: Start" Mar 17 20:42:44.052521 kubelet[1558]: I0317 20:42:44.052509 1558 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 20:42:44.052592 kubelet[1558]: I0317 20:42:44.052582 1558 state_mem.go:35] "Initializing new in-memory state store" Mar 17 20:42:44.089340 kubelet[1558]: E0317 20:42:44.089281 1558 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 20:42:44.114543 systemd[1]: Created slice kubepods.slice. Mar 17 20:42:44.126603 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 20:42:44.134409 systemd[1]: Created slice kubepods-besteffort.slice. 
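[Editorial note] The three "Created slice" entries above set up the systemd cgroup hierarchy for pod QoS classes: kubepods.slice with burstable and besteffort children. Per-pod slices nest below these, and dashes in the pod UID are escaped to underscores in the unit name, as visible further down in this log (kubepods-besteffort-podc0a92826_3be1_4970_87fc_8c092eb2592c.slice). The helper below is an illustrative sketch of that naming scheme, not kubelet API:

package main

import (
	"fmt"
	"strings"
)

// podSlice builds the systemd slice unit name for a pod, following the
// pattern seen in this log; guaranteed pods sit directly under kubepods.slice.
func podSlice(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(podSlice("besteffort", "c0a92826-3be1-4970-87fc-8c092eb2592c"))
	// kubepods-besteffort-podc0a92826_3be1_4970_87fc_8c092eb2592c.slice
}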
Mar 17 20:42:44.136140 kubelet[1558]: E0317 20:42:44.136087 1558 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" Mar 17 20:42:44.143126 kubelet[1558]: I0317 20:42:44.143064 1558 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 20:42:44.143482 kubelet[1558]: I0317 20:42:44.143444 1558 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 20:42:44.143622 kubelet[1558]: I0317 20:42:44.143502 1558 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 20:42:44.147375 kubelet[1558]: I0317 20:42:44.146765 1558 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 20:42:44.149487 kubelet[1558]: E0317 20:42:44.149430 1558 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-0-2f3ee5d9b1.novalocal?timeout=10s\": dial tcp 172.24.4.253:6443: connect: connection refused" interval="400ms" Mar 17 20:42:44.150095 kubelet[1558]: E0317 20:42:44.150053 1558 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 20:42:44.150234 kubelet[1558]: E0317 20:42:44.150194 1558 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" Mar 17 20:42:44.247507 kubelet[1558]: I0317 20:42:44.247328 1558 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.250415 kubelet[1558]: E0317 20:42:44.250323 1558 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.253:6443/api/v1/nodes\": dial tcp 172.24.4.253:6443: connect: connection refused" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.311835 systemd[1]: Created slice kubepods-burstable-podae6c3ee4aabfa2cd19362c3c09dbac6e.slice. Mar 17 20:42:44.322483 kubelet[1558]: E0317 20:42:44.322091 1558 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.329226 systemd[1]: Created slice kubepods-burstable-pod826ad8bf99baabca0a103692fd59c3de.slice. Mar 17 20:42:44.337101 kubelet[1558]: E0317 20:42:44.337056 1558 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.344589 systemd[1]: Created slice kubepods-burstable-podc0f610eb83885401cc81f853174aff3a.slice. 
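[Editorial note] The "Failed to ensure lease exists, will retry" errors report a doubling retry interval: 200ms earlier, 400ms here, then 800ms and 1.6s further down. The sketch below reproduces only that observed progression; the cap value is an assumption for illustration, and the kubelet's actual node-lease backoff logic is not reproduced here:

package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap, for illustration only
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: next retry in %v\n", attempt, interval)
		interval *= 2 // doubling matches the 200ms -> 400ms -> 800ms -> 1.6s seen in the log
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}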
Mar 17 20:42:44.348057 kubelet[1558]: I0317 20:42:44.347986 1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/826ad8bf99baabca0a103692fd59c3de-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"826ad8bf99baabca0a103692fd59c3de\") " pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.349950 kubelet[1558]: I0317 20:42:44.349874 1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0f610eb83885401cc81f853174aff3a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"c0f610eb83885401cc81f853174aff3a\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.350083 kubelet[1558]: I0317 20:42:44.349957 1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.350083 kubelet[1558]: I0317 20:42:44.350012 1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.350083 kubelet[1558]: I0317 20:42:44.350059 1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.350302 kubelet[1558]: I0317 20:42:44.350106 1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.350302 kubelet[1558]: I0317 20:42:44.350160 1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0f610eb83885401cc81f853174aff3a-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"c0f610eb83885401cc81f853174aff3a\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.350302 kubelet[1558]: I0317 20:42:44.350205 1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0f610eb83885401cc81f853174aff3a-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"c0f610eb83885401cc81f853174aff3a\") " 
pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.350302 kubelet[1558]: I0317 20:42:44.350247 1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.350800 kubelet[1558]: E0317 20:42:44.350740 1558 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.459590 kubelet[1558]: I0317 20:42:44.459534 1558 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.465039 kubelet[1558]: E0317 20:42:44.464907 1558 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.253:6443/api/v1/nodes\": dial tcp 172.24.4.253:6443: connect: connection refused" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.551747 kubelet[1558]: E0317 20:42:44.551602 1558 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-0-2f3ee5d9b1.novalocal?timeout=10s\": dial tcp 172.24.4.253:6443: connect: connection refused" interval="800ms" Mar 17 20:42:44.629295 env[1151]: time="2025-03-17T20:42:44.629101170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal,Uid:ae6c3ee4aabfa2cd19362c3c09dbac6e,Namespace:kube-system,Attempt:0,}" Mar 17 20:42:44.640129 env[1151]: time="2025-03-17T20:42:44.640056213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal,Uid:826ad8bf99baabca0a103692fd59c3de,Namespace:kube-system,Attempt:0,}" Mar 17 20:42:44.653035 env[1151]: time="2025-03-17T20:42:44.652865866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal,Uid:c0f610eb83885401cc81f853174aff3a,Namespace:kube-system,Attempt:0,}" Mar 17 20:42:44.736294 kubelet[1558]: W0317 20:42:44.736178 1558 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-0-2f3ee5d9b1.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Mar 17 20:42:44.736724 kubelet[1558]: E0317 20:42:44.736621 1558 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-0-2f3ee5d9b1.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:42:44.868792 kubelet[1558]: I0317 20:42:44.868566 1558 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.871492 kubelet[1558]: E0317 20:42:44.871394 1558 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.253:6443/api/v1/nodes\": dial tcp 172.24.4.253:6443: connect: connection refused" 
node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:44.899953 kubelet[1558]: W0317 20:42:44.899806 1558 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Mar 17 20:42:44.900152 kubelet[1558]: E0317 20:42:44.899946 1558 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:42:45.169539 kubelet[1558]: W0317 20:42:45.169313 1558 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Mar 17 20:42:45.170238 kubelet[1558]: E0317 20:42:45.170129 1558 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:42:45.294051 kubelet[1558]: W0317 20:42:45.293984 1558 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.253:6443: connect: connection refused Mar 17 20:42:45.294401 kubelet[1558]: E0317 20:42:45.294358 1558 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:42:45.353816 kubelet[1558]: E0317 20:42:45.353718 1558 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-0-2f3ee5d9b1.novalocal?timeout=10s\": dial tcp 172.24.4.253:6443: connect: connection refused" interval="1.6s" Mar 17 20:42:45.675581 kubelet[1558]: I0317 20:42:45.674948 1558 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:45.675581 kubelet[1558]: E0317 20:42:45.675518 1558 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.253:6443/api/v1/nodes\": dial tcp 172.24.4.253:6443: connect: connection refused" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:45.845721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494096951.mount: Deactivated successfully. 
Mar 17 20:42:45.855500 env[1151]: time="2025-03-17T20:42:45.855400134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.861871 env[1151]: time="2025-03-17T20:42:45.861814243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.864084 env[1151]: time="2025-03-17T20:42:45.864031694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.868699 env[1151]: time="2025-03-17T20:42:45.868577710Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.874021 env[1151]: time="2025-03-17T20:42:45.873968200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.880324 env[1151]: time="2025-03-17T20:42:45.880269313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.893424 env[1151]: time="2025-03-17T20:42:45.890912210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.902608 env[1151]: time="2025-03-17T20:42:45.902530573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.904317 env[1151]: time="2025-03-17T20:42:45.904249681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.906133 env[1151]: time="2025-03-17T20:42:45.906068049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.908795 env[1151]: time="2025-03-17T20:42:45.908596014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.915949 env[1151]: time="2025-03-17T20:42:45.915876480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:42:45.960443 env[1151]: time="2025-03-17T20:42:45.959374561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:42:45.960443 env[1151]: time="2025-03-17T20:42:45.959415799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:42:45.960936 env[1151]: time="2025-03-17T20:42:45.959429055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:42:45.960936 env[1151]: time="2025-03-17T20:42:45.959615111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a2250bcbb8b94756ae4e6d172be8f6242a4ffd8c9f6d1df821e4534ab1e7999 pid=1598 runtime=io.containerd.runc.v2 Mar 17 20:42:45.976491 kubelet[1558]: E0317 20:42:45.976383 1558 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.253:6443: connect: connection refused" logger="UnhandledError" Mar 17 20:42:45.983884 systemd[1]: Started cri-containerd-8a2250bcbb8b94756ae4e6d172be8f6242a4ffd8c9f6d1df821e4534ab1e7999.scope. Mar 17 20:42:46.002292 env[1151]: time="2025-03-17T20:42:46.001286157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:42:46.002292 env[1151]: time="2025-03-17T20:42:46.001396779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:42:46.002292 env[1151]: time="2025-03-17T20:42:46.001426916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:42:46.002292 env[1151]: time="2025-03-17T20:42:46.001576802Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/346d6156fc6c1aa2a1a1b73192f4f5a9a15c87f0ea205b32104faba309bfad75 pid=1629 runtime=io.containerd.runc.v2 Mar 17 20:42:46.002750 env[1151]: time="2025-03-17T20:42:46.002694739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:42:46.002853 env[1151]: time="2025-03-17T20:42:46.002729255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:42:46.002962 env[1151]: time="2025-03-17T20:42:46.002931200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:42:46.003212 env[1151]: time="2025-03-17T20:42:46.003163163Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86ca37fb8cbead86e09c63289a60040019f02437976004f6ef401a30cf857b46 pid=1633 runtime=io.containerd.runc.v2 Mar 17 20:42:46.024673 systemd[1]: Started cri-containerd-86ca37fb8cbead86e09c63289a60040019f02437976004f6ef401a30cf857b46.scope. Mar 17 20:42:46.035186 systemd[1]: Started cri-containerd-346d6156fc6c1aa2a1a1b73192f4f5a9a15c87f0ea205b32104faba309bfad75.scope. 
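[Editorial note] Each "starting signal loop" entry above corresponds to a containerd runc v2 shim, and its path= field names the task bundle directory, /run/containerd/io.containerd.runtime.v2.task/<namespace>/<id>. A small diagnostic sketch that checks for one of the bundles named in the log; the path layout is taken directly from the log lines, not from a containerd API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const root = "/run/containerd/io.containerd.runtime.v2.task"
	// Sandbox ID of the kube-controller-manager pod, copied from the log.
	id := "8a2250bcbb8b94756ae4e6d172be8f6242a4ffd8c9f6d1df821e4534ab1e7999"
	bundle := filepath.Join(root, "k8s.io", id)
	if _, err := os.Stat(bundle); err != nil {
		fmt.Println("bundle not found:", err)
		return
	}
	fmt.Println("shim bundle:", bundle)
}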
Mar 17 20:42:46.065155 env[1151]: time="2025-03-17T20:42:46.065105669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal,Uid:ae6c3ee4aabfa2cd19362c3c09dbac6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a2250bcbb8b94756ae4e6d172be8f6242a4ffd8c9f6d1df821e4534ab1e7999\"" Mar 17 20:42:46.070573 env[1151]: time="2025-03-17T20:42:46.070507399Z" level=info msg="CreateContainer within sandbox \"8a2250bcbb8b94756ae4e6d172be8f6242a4ffd8c9f6d1df821e4534ab1e7999\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 20:42:46.110680 env[1151]: time="2025-03-17T20:42:46.109712156Z" level=info msg="CreateContainer within sandbox \"8a2250bcbb8b94756ae4e6d172be8f6242a4ffd8c9f6d1df821e4534ab1e7999\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eafbdee7e4ef7f97f84351ddd3492d0238d9173b0c7e0a3b568fe33d517e109b\"" Mar 17 20:42:46.110680 env[1151]: time="2025-03-17T20:42:46.110554445Z" level=info msg="StartContainer for \"eafbdee7e4ef7f97f84351ddd3492d0238d9173b0c7e0a3b568fe33d517e109b\"" Mar 17 20:42:46.113278 env[1151]: time="2025-03-17T20:42:46.113227502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal,Uid:c0f610eb83885401cc81f853174aff3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"86ca37fb8cbead86e09c63289a60040019f02437976004f6ef401a30cf857b46\"" Mar 17 20:42:46.115943 env[1151]: time="2025-03-17T20:42:46.115909967Z" level=info msg="CreateContainer within sandbox \"86ca37fb8cbead86e09c63289a60040019f02437976004f6ef401a30cf857b46\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 20:42:46.136890 systemd[1]: Started cri-containerd-eafbdee7e4ef7f97f84351ddd3492d0238d9173b0c7e0a3b568fe33d517e109b.scope. Mar 17 20:42:46.140933 env[1151]: time="2025-03-17T20:42:46.140775269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal,Uid:826ad8bf99baabca0a103692fd59c3de,Namespace:kube-system,Attempt:0,} returns sandbox id \"346d6156fc6c1aa2a1a1b73192f4f5a9a15c87f0ea205b32104faba309bfad75\"" Mar 17 20:42:46.143215 env[1151]: time="2025-03-17T20:42:46.143180665Z" level=info msg="CreateContainer within sandbox \"86ca37fb8cbead86e09c63289a60040019f02437976004f6ef401a30cf857b46\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41b31fd5e81397ee6db42ada3208b38db4ea226c67830950e8b5d4430bbd40bf\"" Mar 17 20:42:46.145856 env[1151]: time="2025-03-17T20:42:46.145833362Z" level=info msg="StartContainer for \"41b31fd5e81397ee6db42ada3208b38db4ea226c67830950e8b5d4430bbd40bf\"" Mar 17 20:42:46.150202 env[1151]: time="2025-03-17T20:42:46.150172572Z" level=info msg="CreateContainer within sandbox \"346d6156fc6c1aa2a1a1b73192f4f5a9a15c87f0ea205b32104faba309bfad75\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 20:42:46.169986 systemd[1]: Started cri-containerd-41b31fd5e81397ee6db42ada3208b38db4ea226c67830950e8b5d4430bbd40bf.scope. 
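[Editorial note] The containerd entries here trace the per-pod CRI sequence: RunPodSandbox returns a sandbox ID, CreateContainer places a container inside that sandbox and returns a container ID, and StartContainer runs it (the "returns successfully" lines that follow). Below is a trimmed, self-contained illustration of that flow against a fake runtime; the real CRI RuntimeService interface takes structured requests, not strings:

package main

import "fmt"

// runtimeService is a stand-in for the three CRI calls named in the log.
type runtimeService interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

func startStaticPod(rs runtimeService, pod, container string) error {
	sb, err := rs.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	id, err := rs.CreateContainer(sb, container)
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	return rs.StartContainer(id)
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return sb + "/" + name, nil
}
func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("StartContainer for", id, "returns successfully")
	return nil
}

func main() {
	rt := &fakeRuntime{}
	for _, p := range []string{"kube-controller-manager", "kube-apiserver", "kube-scheduler"} {
		if err := startStaticPod(rt, p, p); err != nil {
			fmt.Println(err)
		}
	}
}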
Mar 17 20:42:46.192388 env[1151]: time="2025-03-17T20:42:46.192299642Z" level=info msg="CreateContainer within sandbox \"346d6156fc6c1aa2a1a1b73192f4f5a9a15c87f0ea205b32104faba309bfad75\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cde04d3b54da0c9bbba9f2676ca965847a6cfb84ac222c6780e05dabf42f108b\"" Mar 17 20:42:46.192902 env[1151]: time="2025-03-17T20:42:46.192872066Z" level=info msg="StartContainer for \"cde04d3b54da0c9bbba9f2676ca965847a6cfb84ac222c6780e05dabf42f108b\"" Mar 17 20:42:46.198796 env[1151]: time="2025-03-17T20:42:46.198738283Z" level=info msg="StartContainer for \"eafbdee7e4ef7f97f84351ddd3492d0238d9173b0c7e0a3b568fe33d517e109b\" returns successfully" Mar 17 20:42:46.211381 systemd[1]: Started cri-containerd-cde04d3b54da0c9bbba9f2676ca965847a6cfb84ac222c6780e05dabf42f108b.scope. Mar 17 20:42:46.252580 env[1151]: time="2025-03-17T20:42:46.252526770Z" level=info msg="StartContainer for \"41b31fd5e81397ee6db42ada3208b38db4ea226c67830950e8b5d4430bbd40bf\" returns successfully" Mar 17 20:42:46.297468 env[1151]: time="2025-03-17T20:42:46.297422611Z" level=info msg="StartContainer for \"cde04d3b54da0c9bbba9f2676ca965847a6cfb84ac222c6780e05dabf42f108b\" returns successfully" Mar 17 20:42:47.000921 kubelet[1558]: E0317 20:42:47.000747 1558 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:47.003203 kubelet[1558]: E0317 20:42:47.003176 1558 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:47.004969 kubelet[1558]: E0317 20:42:47.004944 1558 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:47.277770 kubelet[1558]: I0317 20:42:47.277663 1558 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.007980 kubelet[1558]: E0317 20:42:48.007940 1558 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.008392 kubelet[1558]: E0317 20:42:48.008339 1558 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.477071 kubelet[1558]: E0317 20:42:48.477016 1558 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.573724 kubelet[1558]: I0317 20:42:48.573683 1558 kubelet_node_status.go:79] "Successfully registered node" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.636675 kubelet[1558]: I0317 20:42:48.636634 1558 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.647915 kubelet[1558]: E0317 20:42:48.647868 1558 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.647915 kubelet[1558]: I0317 20:42:48.647891 1558 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.649353 kubelet[1558]: E0317 20:42:48.649317 1558 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.649353 kubelet[1558]: I0317 20:42:48.649338 1558 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.653425 kubelet[1558]: E0317 20:42:48.653388 1558 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:48.898274 kubelet[1558]: I0317 20:42:48.898220 1558 apiserver.go:52] "Watching apiserver" Mar 17 20:42:48.941279 kubelet[1558]: I0317 20:42:48.941227 1558 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 20:42:49.008143 kubelet[1558]: I0317 20:42:49.008101 1558 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:49.014238 kubelet[1558]: E0317 20:42:49.014191 1558 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:49.907251 kubelet[1558]: I0317 20:42:49.907037 1558 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:49.927556 kubelet[1558]: W0317 20:42:49.927447 1558 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:42:50.458184 kubelet[1558]: I0317 20:42:50.458142 1558 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:50.470706 kubelet[1558]: W0317 20:42:50.470658 1558 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:42:51.269763 systemd[1]: Reloading. Mar 17 20:42:51.418678 /usr/lib/systemd/system-generators/torcx-generator[1853]: time="2025-03-17T20:42:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 20:42:51.418708 /usr/lib/systemd/system-generators/torcx-generator[1853]: time="2025-03-17T20:42:51Z" level=info msg="torcx already run" Mar 17 20:42:51.496878 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 20:42:51.496896 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 20:42:51.520230 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:42:51.637053 kubelet[1558]: I0317 20:42:51.636900 1558 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 20:42:51.637323 systemd[1]: Stopping kubelet.service... Mar 17 20:42:51.657090 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 20:42:51.657288 systemd[1]: Stopped kubelet.service. Mar 17 20:42:51.657340 systemd[1]: kubelet.service: Consumed 1.760s CPU time. Mar 17 20:42:51.659281 systemd[1]: Starting kubelet.service... Mar 17 20:42:51.765577 systemd[1]: Started kubelet.service. Mar 17 20:42:51.837947 kubelet[1900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:42:51.837947 kubelet[1900]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 20:42:51.837947 kubelet[1900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:42:51.838612 kubelet[1900]: I0317 20:42:51.837954 1900 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 20:42:51.849366 kubelet[1900]: I0317 20:42:51.849299 1900 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 20:42:51.849366 kubelet[1900]: I0317 20:42:51.849348 1900 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 20:42:51.849706 kubelet[1900]: I0317 20:42:51.849677 1900 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 20:42:51.851444 kubelet[1900]: I0317 20:42:51.851410 1900 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 20:42:51.853958 kubelet[1900]: I0317 20:42:51.853917 1900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 20:42:51.858915 kubelet[1900]: E0317 20:42:51.858858 1900 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 20:42:51.858915 kubelet[1900]: I0317 20:42:51.858891 1900 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 20:42:51.861619 kubelet[1900]: I0317 20:42:51.861577 1900 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 20:42:51.861861 kubelet[1900]: I0317 20:42:51.861809 1900 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 20:42:51.862064 kubelet[1900]: I0317 20:42:51.861841 1900 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-0-2f3ee5d9b1.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 20:42:51.862064 kubelet[1900]: I0317 20:42:51.862065 1900 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 20:42:51.862348 kubelet[1900]: I0317 20:42:51.862077 1900 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 20:42:51.862348 kubelet[1900]: I0317 20:42:51.862129 1900 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:42:51.862348 kubelet[1900]: I0317 20:42:51.862250 1900 kubelet.go:446] "Attempting to sync node with API server" Mar 17 20:42:51.862348 kubelet[1900]: I0317 20:42:51.862283 1900 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 20:42:51.862348 kubelet[1900]: I0317 20:42:51.862311 1900 kubelet.go:352] "Adding apiserver pod source" Mar 17 20:42:51.862348 kubelet[1900]: I0317 20:42:51.862327 1900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 20:42:51.865178 kubelet[1900]: I0317 20:42:51.865129 1900 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 20:42:51.865679 kubelet[1900]: I0317 20:42:51.865651 1900 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 20:42:51.866200 kubelet[1900]: I0317 20:42:51.866128 1900 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 20:42:51.866200 kubelet[1900]: I0317 20:42:51.866163 1900 server.go:1287] "Started kubelet" Mar 17 20:42:51.871662 kubelet[1900]: I0317 20:42:51.868323 1900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 20:42:51.879852 kubelet[1900]: I0317 20:42:51.879795 1900 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 20:42:51.881243 kubelet[1900]: I0317 20:42:51.881182 1900 server.go:490] "Adding debug handlers to kubelet server" Mar 17 20:42:51.883102 kubelet[1900]: I0317 20:42:51.882965 1900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 20:42:51.883263 kubelet[1900]: I0317 20:42:51.883238 1900 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 20:42:51.884934 kubelet[1900]: I0317 20:42:51.884908 1900 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 20:42:51.886538 kubelet[1900]: I0317 20:42:51.886512 1900 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 20:42:51.886805 kubelet[1900]: E0317 20:42:51.886772 1900 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" not found" Mar 17 20:42:51.887346 kubelet[1900]: I0317 20:42:51.887294 1900 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 20:42:51.887455 kubelet[1900]: I0317 20:42:51.887413 1900 reconciler.go:26] "Reconciler: start to sync state" Mar 17 20:42:51.890825 kubelet[1900]: I0317 20:42:51.890789 1900 factory.go:221] Registration of the systemd container factory successfully Mar 17 20:42:51.891227 kubelet[1900]: I0317 20:42:51.891176 1900 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 20:42:51.905666 kubelet[1900]: I0317 20:42:51.901023 1900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 20:42:51.905666 kubelet[1900]: I0317 20:42:51.901853 1900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 20:42:51.905666 kubelet[1900]: I0317 20:42:51.901873 1900 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 20:42:51.905666 kubelet[1900]: I0317 20:42:51.901894 1900 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
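[Editorial note] The crio factory error above shows how a unix socket is addressed over HTTP: the socket path appears percent-encoded in the URL's host position (http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info), while the actual connection is pinned to the socket by a custom dialer. A minimal client doing the same, assuming only the socket path from the log; the placeholder host in the request URL is an assumption, since the dialer ignores it:

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	socket := "/var/run/crio/crio.sock" // socket path from the log
	client := &http.Client{Transport: &http.Transport{
		// Ignore the URL's host and always dial the unix socket; the
		// "%2Fvar%2Frun%2Fcrio%2Fcrio.sock" host in the logged URL is just
		// the socket path escaped into the host position.
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", socket)
		},
	}}
	resp, err := client.Get("http://crio/info") // placeholder host, never resolved
	if err != nil {
		fmt.Println(err) // "no such file or directory" when crio is absent, as in the log
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s", body)
}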
Mar 17 20:42:51.905666 kubelet[1900]: I0317 20:42:51.901901 1900 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 20:42:51.905666 kubelet[1900]: E0317 20:42:51.901946 1900 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 20:42:51.919841 kubelet[1900]: I0317 20:42:51.919810 1900 factory.go:221] Registration of the containerd container factory successfully Mar 17 20:42:51.965007 kubelet[1900]: I0317 20:42:51.964980 1900 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 20:42:51.965007 kubelet[1900]: I0317 20:42:51.965000 1900 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 20:42:51.965007 kubelet[1900]: I0317 20:42:51.965018 1900 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:42:51.965268 kubelet[1900]: I0317 20:42:51.965242 1900 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 20:42:51.965306 kubelet[1900]: I0317 20:42:51.965255 1900 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 20:42:51.965306 kubelet[1900]: I0317 20:42:51.965299 1900 policy_none.go:49] "None policy: Start" Mar 17 20:42:51.965364 kubelet[1900]: I0317 20:42:51.965310 1900 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 20:42:51.965364 kubelet[1900]: I0317 20:42:51.965321 1900 state_mem.go:35] "Initializing new in-memory state store" Mar 17 20:42:51.965761 kubelet[1900]: I0317 20:42:51.965737 1900 state_mem.go:75] "Updated machine memory state" Mar 17 20:42:51.971597 kubelet[1900]: I0317 20:42:51.971549 1900 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 20:42:51.971798 kubelet[1900]: I0317 20:42:51.971785 1900 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 20:42:51.971846 kubelet[1900]: I0317 20:42:51.971799 1900 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 20:42:51.972808 kubelet[1900]: I0317 20:42:51.972553 1900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 20:42:51.978576 kubelet[1900]: E0317 20:42:51.978524 1900 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 20:42:52.002669 kubelet[1900]: I0317 20:42:52.002613 1900 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.006923 kubelet[1900]: I0317 20:42:52.003023 1900 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.006923 kubelet[1900]: I0317 20:42:52.006757 1900 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.043326 kubelet[1900]: W0317 20:42:52.043286 1900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:42:52.043780 kubelet[1900]: W0317 20:42:52.043736 1900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:42:52.043949 kubelet[1900]: E0317 20:42:52.043732 1900 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.044984 kubelet[1900]: W0317 20:42:52.044953 1900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:42:52.045205 kubelet[1900]: E0317 20:42:52.044995 1900 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.075042 kubelet[1900]: I0317 20:42:52.074990 1900 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.087772 kubelet[1900]: I0317 20:42:52.087707 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0f610eb83885401cc81f853174aff3a-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"c0f610eb83885401cc81f853174aff3a\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.087772 kubelet[1900]: I0317 20:42:52.087746 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0f610eb83885401cc81f853174aff3a-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"c0f610eb83885401cc81f853174aff3a\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.087772 kubelet[1900]: I0317 20:42:52.087774 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.088201 kubelet[1900]: I0317 20:42:52.087795 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.088201 kubelet[1900]: I0317 20:42:52.087828 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.088201 kubelet[1900]: I0317 20:42:52.087849 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0f610eb83885401cc81f853174aff3a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"c0f610eb83885401cc81f853174aff3a\") " pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.088201 kubelet[1900]: I0317 20:42:52.087869 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.088505 kubelet[1900]: I0317 20:42:52.087889 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ae6c3ee4aabfa2cd19362c3c09dbac6e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"ae6c3ee4aabfa2cd19362c3c09dbac6e\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.088505 kubelet[1900]: I0317 20:42:52.087909 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/826ad8bf99baabca0a103692fd59c3de-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" (UID: \"826ad8bf99baabca0a103692fd59c3de\") " pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.101339 kubelet[1900]: I0317 20:42:52.101295 1900 kubelet_node_status.go:125] "Node was previously registered" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.101645 kubelet[1900]: I0317 20:42:52.101602 1900 kubelet_node_status.go:79] "Successfully registered node" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.265467 sudo[1931]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 20:42:52.266769 sudo[1931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 20:42:52.877657 kubelet[1900]: I0317 20:42:52.877599 1900 apiserver.go:52] "Watching apiserver" Mar 17 20:42:52.887682 kubelet[1900]: I0317 20:42:52.887652 1900 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 20:42:52.918285 kubelet[1900]: I0317 20:42:52.917757 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" podStartSLOduration=3.917693608 podStartE2EDuration="3.917693608s" podCreationTimestamp="2025-03-17 20:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:42:52.917564673 +0000 UTC m=+1.140561131" watchObservedRunningTime="2025-03-17 20:42:52.917693608 +0000 UTC m=+1.140690046" Mar 17 20:42:52.941133 kubelet[1900]: I0317 20:42:52.941050 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" podStartSLOduration=0.941026562 podStartE2EDuration="941.026562ms" podCreationTimestamp="2025-03-17 20:42:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:42:52.929206152 +0000 UTC m=+1.152202600" watchObservedRunningTime="2025-03-17 20:42:52.941026562 +0000 UTC m=+1.164023000" Mar 17 20:42:52.950866 kubelet[1900]: I0317 20:42:52.950842 1900 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.950976 kubelet[1900]: I0317 20:42:52.950786 1900 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.962293 kubelet[1900]: I0317 20:42:52.962219 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-0-2f3ee5d9b1.novalocal" podStartSLOduration=2.96219789 podStartE2EDuration="2.96219789s" podCreationTimestamp="2025-03-17 20:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:42:52.941777748 +0000 UTC m=+1.164774207" watchObservedRunningTime="2025-03-17 20:42:52.96219789 +0000 UTC m=+1.185194328" Mar 17 20:42:52.964845 kubelet[1900]: W0317 20:42:52.964818 1900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:42:52.964903 kubelet[1900]: E0317 20:42:52.964876 1900 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:52.965208 kubelet[1900]: W0317 20:42:52.965186 1900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:42:52.965266 kubelet[1900]: E0317 20:42:52.965221 1900 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-7-0-2f3ee5d9b1.novalocal" Mar 17 20:42:53.025995 sudo[1931]: pam_unix(sudo:session): session closed for user root Mar 17 20:42:55.509676 kubelet[1900]: I0317 20:42:55.509273 1900 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 20:42:55.511096 kubelet[1900]: I0317 20:42:55.510278 1900 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 20:42:55.511143 env[1151]: time="2025-03-17T20:42:55.509907504Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." Mar 17 20:42:56.093756 sudo[1291]: pam_unix(sudo:session): session closed for user root Mar 17 20:42:56.125869 systemd[1]: Created slice kubepods-besteffort-podc0a92826_3be1_4970_87fc_8c092eb2592c.slice. Mar 17 20:42:56.145566 systemd[1]: Created slice kubepods-burstable-pod0a725dc2_02b5_45fe_8ef2_0f6f073a880f.slice. Mar 17 20:42:56.234089 kubelet[1900]: E0317 20:42:56.233987 1900 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-6pgg9 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-6hjrf" podUID="0a725dc2-02b5-45fe-8ef2-0f6f073a880f" Mar 17 20:42:56.312936 kubelet[1900]: I0317 20:42:56.312877 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-host-proc-sys-kernel\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.312936 kubelet[1900]: I0317 20:42:56.312927 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-xtables-lock\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313130 kubelet[1900]: I0317 20:42:56.312966 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-clustermesh-secrets\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313130 kubelet[1900]: I0317 20:42:56.312989 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9tv\" (UniqueName: \"kubernetes.io/projected/c0a92826-3be1-4970-87fc-8c092eb2592c-kube-api-access-hw9tv\") pod \"kube-proxy-p4ckx\" (UID: \"c0a92826-3be1-4970-87fc-8c092eb2592c\") " pod="kube-system/kube-proxy-p4ckx" Mar 17 20:42:56.313130 kubelet[1900]: I0317 20:42:56.313010 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cni-path\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313130 kubelet[1900]: I0317 20:42:56.313043 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-etc-cni-netd\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313130 kubelet[1900]: I0317 20:42:56.313062 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-config-path\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313130 kubelet[1900]: I0317 20:42:56.313080 1900 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-hubble-tls\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313292 kubelet[1900]: I0317 20:42:56.313098 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0a92826-3be1-4970-87fc-8c092eb2592c-kube-proxy\") pod \"kube-proxy-p4ckx\" (UID: \"c0a92826-3be1-4970-87fc-8c092eb2592c\") " pod="kube-system/kube-proxy-p4ckx" Mar 17 20:42:56.313292 kubelet[1900]: I0317 20:42:56.313133 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0a92826-3be1-4970-87fc-8c092eb2592c-lib-modules\") pod \"kube-proxy-p4ckx\" (UID: \"c0a92826-3be1-4970-87fc-8c092eb2592c\") " pod="kube-system/kube-proxy-p4ckx" Mar 17 20:42:56.313292 kubelet[1900]: I0317 20:42:56.313153 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-cgroup\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313292 kubelet[1900]: I0317 20:42:56.313172 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0a92826-3be1-4970-87fc-8c092eb2592c-xtables-lock\") pod \"kube-proxy-p4ckx\" (UID: \"c0a92826-3be1-4970-87fc-8c092eb2592c\") " pod="kube-system/kube-proxy-p4ckx" Mar 17 20:42:56.313292 kubelet[1900]: I0317 20:42:56.313214 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-run\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313292 kubelet[1900]: I0317 20:42:56.313234 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-bpf-maps\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313463 kubelet[1900]: I0317 20:42:56.313250 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pgg9\" (UniqueName: \"kubernetes.io/projected/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-kube-api-access-6pgg9\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313463 kubelet[1900]: I0317 20:42:56.313270 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-lib-modules\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313463 kubelet[1900]: I0317 20:42:56.313320 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-hostproc\") 
pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.313463 kubelet[1900]: I0317 20:42:56.313338 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-host-proc-sys-net\") pod \"cilium-6hjrf\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " pod="kube-system/cilium-6hjrf" Mar 17 20:42:56.346485 sshd[1277]: pam_unix(sshd:session): session closed for user core Mar 17 20:42:56.350016 systemd-logind[1140]: Session 7 logged out. Waiting for processes to exit. Mar 17 20:42:56.351218 systemd[1]: sshd@6-172.24.4.253:22-172.24.4.1:34116.service: Deactivated successfully. Mar 17 20:42:56.352019 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 20:42:56.352147 systemd[1]: session-7.scope: Consumed 8.864s CPU time. Mar 17 20:42:56.353421 systemd-logind[1140]: Removed session 7. Mar 17 20:42:56.416179 kubelet[1900]: I0317 20:42:56.415586 1900 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 20:42:56.559555 systemd[1]: Created slice kubepods-besteffort-pod378d1445_161d_4308_bf2e_31133d8a34c4.slice. Mar 17 20:42:56.564244 kubelet[1900]: I0317 20:42:56.564177 1900 status_manager.go:890] "Failed to get status for pod" podUID="378d1445-161d-4308-bf2e-31133d8a34c4" pod="kube-system/cilium-operator-6c4d7847fc-zq4rs" err="pods \"cilium-operator-6c4d7847fc-zq4rs\" is forbidden: User \"system:node:ci-3510-3-7-0-2f3ee5d9b1.novalocal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-0-2f3ee5d9b1.novalocal' and this object" Mar 17 20:42:56.617185 kubelet[1900]: I0317 20:42:56.616261 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/378d1445-161d-4308-bf2e-31133d8a34c4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zq4rs\" (UID: \"378d1445-161d-4308-bf2e-31133d8a34c4\") " pod="kube-system/cilium-operator-6c4d7847fc-zq4rs" Mar 17 20:42:56.617615 kubelet[1900]: I0317 20:42:56.617501 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqcg5\" (UniqueName: \"kubernetes.io/projected/378d1445-161d-4308-bf2e-31133d8a34c4-kube-api-access-gqcg5\") pod \"cilium-operator-6c4d7847fc-zq4rs\" (UID: \"378d1445-161d-4308-bf2e-31133d8a34c4\") " pod="kube-system/cilium-operator-6c4d7847fc-zq4rs" Mar 17 20:42:56.735328 env[1151]: time="2025-03-17T20:42:56.734125339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4ckx,Uid:c0a92826-3be1-4970-87fc-8c092eb2592c,Namespace:kube-system,Attempt:0,}" Mar 17 20:42:56.783128 env[1151]: time="2025-03-17T20:42:56.782665584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:42:56.783128 env[1151]: time="2025-03-17T20:42:56.782755525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:42:56.783128 env[1151]: time="2025-03-17T20:42:56.782788357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:42:56.786254 env[1151]: time="2025-03-17T20:42:56.786163158Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cee740a81a7e05365dd6ced5347e73daaa858ff896dba9086930e9494911694 pid=1985 runtime=io.containerd.runc.v2 Mar 17 20:42:56.811165 systemd[1]: Started cri-containerd-9cee740a81a7e05365dd6ced5347e73daaa858ff896dba9086930e9494911694.scope. Mar 17 20:42:56.840573 env[1151]: time="2025-03-17T20:42:56.840523286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4ckx,Uid:c0a92826-3be1-4970-87fc-8c092eb2592c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cee740a81a7e05365dd6ced5347e73daaa858ff896dba9086930e9494911694\"" Mar 17 20:42:56.845618 env[1151]: time="2025-03-17T20:42:56.845580453Z" level=info msg="CreateContainer within sandbox \"9cee740a81a7e05365dd6ced5347e73daaa858ff896dba9086930e9494911694\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 20:42:56.869260 env[1151]: time="2025-03-17T20:42:56.868892432Z" level=info msg="CreateContainer within sandbox \"9cee740a81a7e05365dd6ced5347e73daaa858ff896dba9086930e9494911694\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"303cebb229e7763deaf88b685559ccb74dc59a0281d54b425efd9e7b3e2523aa\"" Mar 17 20:42:56.870921 env[1151]: time="2025-03-17T20:42:56.870901166Z" level=info msg="StartContainer for \"303cebb229e7763deaf88b685559ccb74dc59a0281d54b425efd9e7b3e2523aa\"" Mar 17 20:42:56.873851 env[1151]: time="2025-03-17T20:42:56.873823670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zq4rs,Uid:378d1445-161d-4308-bf2e-31133d8a34c4,Namespace:kube-system,Attempt:0,}" Mar 17 20:42:56.889055 systemd[1]: Started cri-containerd-303cebb229e7763deaf88b685559ccb74dc59a0281d54b425efd9e7b3e2523aa.scope. Mar 17 20:42:56.904816 env[1151]: time="2025-03-17T20:42:56.904741082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:42:56.904816 env[1151]: time="2025-03-17T20:42:56.904784244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:42:56.905025 env[1151]: time="2025-03-17T20:42:56.904987559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:42:56.905325 env[1151]: time="2025-03-17T20:42:56.905259604Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7 pid=2051 runtime=io.containerd.runc.v2 Mar 17 20:42:56.922447 systemd[1]: Started cri-containerd-b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7.scope. 
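The containerd entries around this point trace the full CRI lifecycle for kube-proxy-p4ckx: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox, and StartContainer launches the process under the runc v2 shim (the "returns successfully" line follows below). A minimal Go sketch of that same call sequence against containerd's CRI socket; the socket path, image tag, and timeout are illustrative assumptions, not values from kubelet source.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// containerd's CRI endpoint on a typical node (assumed path).
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := cri.NewRuntimeServiceClient(conn)

	podCfg := &cri.PodSandboxConfig{
		Metadata: &cri.PodSandboxMetadata{
			Name:      "kube-proxy-p4ckx",
			Namespace: "kube-system",
			Uid:       "c0a92826-3be1-4970-87fc-8c092eb2592c",
		},
	}

	// 1. "RunPodSandbox ... returns sandbox id"
	sb, err := rt.RunPodSandbox(ctx, &cri.RunPodSandboxRequest{Config: podCfg})
	if err != nil {
		panic(err)
	}

	// 2. "CreateContainer within sandbox ... returns container id"
	c, err := rt.CreateContainer(ctx, &cri.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &cri.ContainerConfig{
			Metadata: &cri.ContainerMetadata{Name: "kube-proxy"},
			// Image tag is an assumption for illustration.
			Image: &cri.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"},
		},
		SandboxConfig: podCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. "StartContainer ... returns successfully"
	if _, err := rt.StartContainer(ctx, &cri.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started", c.ContainerId, "in sandbox", sb.PodSandboxId)
}
```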
Mar 17 20:42:56.943913 env[1151]: time="2025-03-17T20:42:56.943863453Z" level=info msg="StartContainer for \"303cebb229e7763deaf88b685559ccb74dc59a0281d54b425efd9e7b3e2523aa\" returns successfully" Mar 17 20:42:56.988877 env[1151]: time="2025-03-17T20:42:56.988836112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zq4rs,Uid:378d1445-161d-4308-bf2e-31133d8a34c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\"" Mar 17 20:42:56.992204 env[1151]: time="2025-03-17T20:42:56.992169986Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 20:42:57.026182 kubelet[1900]: I0317 20:42:57.026142 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-xtables-lock\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026182 kubelet[1900]: I0317 20:42:57.026193 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-etc-cni-netd\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026382 kubelet[1900]: I0317 20:42:57.026213 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-cgroup\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026382 kubelet[1900]: I0317 20:42:57.026232 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-bpf-maps\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026382 kubelet[1900]: I0317 20:42:57.026258 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-config-path\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026382 kubelet[1900]: I0317 20:42:57.026277 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-run\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026382 kubelet[1900]: I0317 20:42:57.026297 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-host-proc-sys-kernel\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026382 kubelet[1900]: I0317 20:42:57.026324 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-clustermesh-secrets\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 
20:42:57.026551 kubelet[1900]: I0317 20:42:57.026340 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cni-path\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026551 kubelet[1900]: I0317 20:42:57.026360 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pgg9\" (UniqueName: \"kubernetes.io/projected/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-kube-api-access-6pgg9\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026551 kubelet[1900]: I0317 20:42:57.026381 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-hubble-tls\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026551 kubelet[1900]: I0317 20:42:57.026399 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-lib-modules\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026551 kubelet[1900]: I0317 20:42:57.026414 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-hostproc\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.026551 kubelet[1900]: I0317 20:42:57.026431 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-host-proc-sys-net\") pod \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\" (UID: \"0a725dc2-02b5-45fe-8ef2-0f6f073a880f\") " Mar 17 20:42:57.027182 kubelet[1900]: I0317 20:42:57.026872 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cni-path" (OuterVolumeSpecName: "cni-path") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.027677 kubelet[1900]: I0317 20:42:57.027647 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.028717 kubelet[1900]: I0317 20:42:57.027772 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.028837 kubelet[1900]: I0317 20:42:57.027790 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-hostproc" (OuterVolumeSpecName: "hostproc") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.028928 kubelet[1900]: I0317 20:42:57.027803 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.029018 kubelet[1900]: I0317 20:42:57.028092 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.029096 kubelet[1900]: I0317 20:42:57.028117 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.029164 kubelet[1900]: I0317 20:42:57.028133 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.029241 kubelet[1900]: I0317 20:42:57.028150 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.029340 kubelet[1900]: I0317 20:42:57.028682 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:42:57.030804 kubelet[1900]: I0317 20:42:57.030782 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-kube-api-access-6pgg9" (OuterVolumeSpecName: "kube-api-access-6pgg9") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "kube-api-access-6pgg9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 20:42:57.031118 kubelet[1900]: I0317 20:42:57.031100 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 20:42:57.031483 kubelet[1900]: I0317 20:42:57.031467 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 20:42:57.033866 kubelet[1900]: I0317 20:42:57.033827 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0a725dc2-02b5-45fe-8ef2-0f6f073a880f" (UID: "0a725dc2-02b5-45fe-8ef2-0f6f073a880f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 20:42:57.127470 kubelet[1900]: I0317 20:42:57.127334 1900 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-lib-modules\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.127470 kubelet[1900]: I0317 20:42:57.127368 1900 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-hostproc\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.127470 kubelet[1900]: I0317 20:42:57.127381 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-host-proc-sys-net\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.127470 kubelet[1900]: I0317 20:42:57.127394 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-cgroup\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.127470 kubelet[1900]: I0317 20:42:57.127405 1900 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-bpf-maps\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.127470 kubelet[1900]: I0317 20:42:57.127416 1900 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-xtables-lock\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.127470 kubelet[1900]: I0317 20:42:57.127427 1900 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-etc-cni-netd\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.128126 kubelet[1900]: I0317 20:42:57.127438 1900 reconciler_common.go:299] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-config-path\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.128126 kubelet[1900]: I0317 20:42:57.127449 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cilium-run\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.128126 kubelet[1900]: I0317 20:42:57.127459 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-host-proc-sys-kernel\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.128126 kubelet[1900]: I0317 20:42:57.127469 1900 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-clustermesh-secrets\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.128126 kubelet[1900]: I0317 20:42:57.127480 1900 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-cni-path\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.128126 kubelet[1900]: I0317 20:42:57.127493 1900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6pgg9\" (UniqueName: \"kubernetes.io/projected/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-kube-api-access-6pgg9\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.128126 kubelet[1900]: I0317 20:42:57.127506 1900 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0a725dc2-02b5-45fe-8ef2-0f6f073a880f-hubble-tls\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:42:57.436743 systemd[1]: var-lib-kubelet-pods-0a725dc2\x2d02b5\x2d45fe\x2d8ef2\x2d0f6f073a880f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 20:42:57.436950 systemd[1]: var-lib-kubelet-pods-0a725dc2\x2d02b5\x2d45fe\x2d8ef2\x2d0f6f073a880f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6pgg9.mount: Deactivated successfully. Mar 17 20:42:57.437110 systemd[1]: var-lib-kubelet-pods-0a725dc2\x2d02b5\x2d45fe\x2d8ef2\x2d0f6f073a880f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 20:42:57.699855 kubelet[1900]: I0317 20:42:57.699533 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p4ckx" podStartSLOduration=1.699395698 podStartE2EDuration="1.699395698s" podCreationTimestamp="2025-03-17 20:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:42:57.005705194 +0000 UTC m=+5.228701622" watchObservedRunningTime="2025-03-17 20:42:57.699395698 +0000 UTC m=+5.922392196" Mar 17 20:42:57.919884 systemd[1]: Removed slice kubepods-burstable-pod0a725dc2_02b5_45fe_8ef2_0f6f073a880f.slice. Mar 17 20:42:58.130365 systemd[1]: Created slice kubepods-burstable-pod3810c7b9_cddb_487d_9002_e7997ea05e95.slice. 
Mar 17 20:42:58.133931 kubelet[1900]: I0317 20:42:58.133694 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-hostproc\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.133931 kubelet[1900]: I0317 20:42:58.133742 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-lib-modules\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.133931 kubelet[1900]: I0317 20:42:58.133764 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-host-proc-sys-kernel\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.133931 kubelet[1900]: I0317 20:42:58.133786 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-cgroup\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.133931 kubelet[1900]: I0317 20:42:58.133805 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3810c7b9-cddb-487d-9002-e7997ea05e95-hubble-tls\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.133931 kubelet[1900]: I0317 20:42:58.133825 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-run\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.134694 kubelet[1900]: I0317 20:42:58.133842 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cni-path\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.134694 kubelet[1900]: I0317 20:42:58.133860 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-config-path\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.134694 kubelet[1900]: I0317 20:42:58.133879 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmz6v\" (UniqueName: \"kubernetes.io/projected/3810c7b9-cddb-487d-9002-e7997ea05e95-kube-api-access-dmz6v\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.134694 kubelet[1900]: I0317 20:42:58.133898 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-etc-cni-netd\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.134694 kubelet[1900]: I0317 20:42:58.133919 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-xtables-lock\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.134694 kubelet[1900]: I0317 20:42:58.133937 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3810c7b9-cddb-487d-9002-e7997ea05e95-clustermesh-secrets\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.134874 kubelet[1900]: I0317 20:42:58.133956 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-bpf-maps\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.134874 kubelet[1900]: I0317 20:42:58.133974 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-host-proc-sys-net\") pod \"cilium-4gnqx\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " pod="kube-system/cilium-4gnqx" Mar 17 20:42:58.435467 env[1151]: time="2025-03-17T20:42:58.434481563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gnqx,Uid:3810c7b9-cddb-487d-9002-e7997ea05e95,Namespace:kube-system,Attempt:0,}" Mar 17 20:42:58.480218 env[1151]: time="2025-03-17T20:42:58.479998132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:42:58.480487 env[1151]: time="2025-03-17T20:42:58.480290736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:42:58.480487 env[1151]: time="2025-03-17T20:42:58.480419680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:42:58.481003 env[1151]: time="2025-03-17T20:42:58.480885400Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0 pid=2236 runtime=io.containerd.runc.v2 Mar 17 20:42:58.529987 systemd[1]: Started cri-containerd-ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0.scope. 
Mar 17 20:42:58.552923 env[1151]: time="2025-03-17T20:42:58.552867050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gnqx,Uid:3810c7b9-cddb-487d-9002-e7997ea05e95,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\"" Mar 17 20:42:59.908582 kubelet[1900]: I0317 20:42:59.908517 1900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a725dc2-02b5-45fe-8ef2-0f6f073a880f" path="/var/lib/kubelet/pods/0a725dc2-02b5-45fe-8ef2-0f6f073a880f/volumes" Mar 17 20:43:02.612083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876167227.mount: Deactivated successfully. Mar 17 20:43:03.746877 env[1151]: time="2025-03-17T20:43:03.746723721Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:43:03.755608 env[1151]: time="2025-03-17T20:43:03.755565176Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:43:03.758855 env[1151]: time="2025-03-17T20:43:03.758832482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:43:03.760334 env[1151]: time="2025-03-17T20:43:03.760270476Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 20:43:03.764986 env[1151]: time="2025-03-17T20:43:03.764955657Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 20:43:03.767608 env[1151]: time="2025-03-17T20:43:03.767583817Z" level=info msg="CreateContainer within sandbox \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 20:43:03.792414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792112946.mount: Deactivated successfully. Mar 17 20:43:03.807925 env[1151]: time="2025-03-17T20:43:03.807889039Z" level=info msg="CreateContainer within sandbox \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\"" Mar 17 20:43:03.810123 env[1151]: time="2025-03-17T20:43:03.810095514Z" level=info msg="StartContainer for \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\"" Mar 17 20:43:03.836551 systemd[1]: Started cri-containerd-9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534.scope. 
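The PullImage request for the operator names the image by tag plus digest, and the later ImageCreate/ImageUpdate events record both the digest-addressed name and the resolved config blob (sha256:ed355d...). A hedged sketch of the same pull through containerd's Go client; the calls are the standard containerd 1.x client API, but treat the exact version and options as assumptions:

```go
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the k8s.io namespace (matches namespace=k8s.io above).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	// Corresponds to the "PullImage ... returns image reference" line.
	fmt.Println("pulled", img.Name())
}
```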
Mar 17 20:43:03.888000 env[1151]: time="2025-03-17T20:43:03.887962751Z" level=info msg="StartContainer for \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\" returns successfully" Mar 17 20:43:04.648592 kubelet[1900]: I0317 20:43:04.648527 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zq4rs" podStartSLOduration=1.877639895 podStartE2EDuration="8.648508435s" podCreationTimestamp="2025-03-17 20:42:56 +0000 UTC" firstStartedPulling="2025-03-17 20:42:56.991283016 +0000 UTC m=+5.214279444" lastFinishedPulling="2025-03-17 20:43:03.762151506 +0000 UTC m=+11.985147984" observedRunningTime="2025-03-17 20:43:04.031779562 +0000 UTC m=+12.254776020" watchObservedRunningTime="2025-03-17 20:43:04.648508435 +0000 UTC m=+12.871504883" Mar 17 20:43:10.705193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2398343495.mount: Deactivated successfully. Mar 17 20:43:15.066597 env[1151]: time="2025-03-17T20:43:15.066395187Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:43:15.070412 env[1151]: time="2025-03-17T20:43:15.070390551Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:43:15.073409 env[1151]: time="2025-03-17T20:43:15.073389482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:43:15.074236 env[1151]: time="2025-03-17T20:43:15.074210687Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 20:43:15.078273 env[1151]: time="2025-03-17T20:43:15.078173680Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 20:43:15.095880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3899183820.mount: Deactivated successfully. Mar 17 20:43:15.102977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610909044.mount: Deactivated successfully. Mar 17 20:43:15.113501 env[1151]: time="2025-03-17T20:43:15.113466261Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\"" Mar 17 20:43:15.114787 env[1151]: time="2025-03-17T20:43:15.114550952Z" level=info msg="StartContainer for \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\"" Mar 17 20:43:15.149868 systemd[1]: Started cri-containerd-8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543.scope. 
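The podStartSLOduration in the cilium-operator tracker line above is not the raw end-to-end time: the image-pull window is subtracted, using the monotonic m=+ offsets rather than the wall-clock stamps. The arithmetic reproduces exactly from the values in that line:

```go
package main

import "fmt"

func main() {
	// Monotonic m=+ offsets from the cilium-operator tracker line above.
	firstStartedPulling := 5.214279444
	lastFinishedPulling := 11.985147984

	// watchObservedRunningTime (20:43:04.648508435) - podCreationTimestamp (20:42:56).
	e2e := 8.648508435

	// SLO duration excludes the time spent pulling the image.
	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("podStartE2EDuration=%.9fs podStartSLOduration=%.9f\n", e2e, slo)
	// Prints podStartSLOduration=1.877639895, matching the log.
}
```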
Mar 17 20:43:15.190820 env[1151]: time="2025-03-17T20:43:15.190776522Z" level=info msg="StartContainer for \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\" returns successfully" Mar 17 20:43:15.201526 systemd[1]: cri-containerd-8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543.scope: Deactivated successfully. Mar 17 20:43:16.095745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543-rootfs.mount: Deactivated successfully. Mar 17 20:43:16.220590 env[1151]: time="2025-03-17T20:43:16.220489777Z" level=info msg="shim disconnected" id=8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543 Mar 17 20:43:16.221334 env[1151]: time="2025-03-17T20:43:16.220605214Z" level=warning msg="cleaning up after shim disconnected" id=8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543 namespace=k8s.io Mar 17 20:43:16.221334 env[1151]: time="2025-03-17T20:43:16.220662622Z" level=info msg="cleaning up dead shim" Mar 17 20:43:16.236283 env[1151]: time="2025-03-17T20:43:16.236168060Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:43:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2355 runtime=io.containerd.runc.v2\n" Mar 17 20:43:17.062248 env[1151]: time="2025-03-17T20:43:17.062134650Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 20:43:17.099060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810173920.mount: Deactivated successfully. Mar 17 20:43:17.124415 env[1151]: time="2025-03-17T20:43:17.124340462Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\"" Mar 17 20:43:17.127886 env[1151]: time="2025-03-17T20:43:17.127844249Z" level=info msg="StartContainer for \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\"" Mar 17 20:43:17.161853 systemd[1]: Started cri-containerd-95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358.scope. Mar 17 20:43:17.191163 env[1151]: time="2025-03-17T20:43:17.191130081Z" level=info msg="StartContainer for \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\" returns successfully" Mar 17 20:43:17.203991 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 20:43:17.204547 systemd[1]: Stopped systemd-sysctl.service. Mar 17 20:43:17.205216 systemd[1]: Stopping systemd-sysctl.service... Mar 17 20:43:17.207326 systemd[1]: Starting systemd-sysctl.service... Mar 17 20:43:17.210558 systemd[1]: cri-containerd-95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358.scope: Deactivated successfully. Mar 17 20:43:17.215020 systemd[1]: Finished systemd-sysctl.service. 
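The apply-sysctl-overwrites step that follows below exits almost immediately, and systemd-sysctl.service is stopped and re-run around it so the node picks up whatever overrides the container dropped in. Purely as an illustration of that mechanism, and with the specific key being an assumption the log does not record, an init container can set a knob either directly through /proc/sys or by writing a /etc/sysctl.d fragment for systemd-sysctl to apply:

```go
package main

import "os"

func main() {
	// Assumed example knob; the log does not say which sysctls were overridden.
	// Option 1: direct write through /proc/sys.
	if err := os.WriteFile("/proc/sys/net/ipv4/conf/all/rp_filter", []byte("0\n"), 0o644); err != nil {
		panic(err)
	}
	// Option 2: drop a fragment for systemd-sysctl.service to re-apply,
	// as the "Starting systemd-sysctl.service..." entries show happening.
	frag := []byte("net.ipv4.conf.all.rp_filter = 0\n")
	if err := os.WriteFile("/etc/sysctl.d/99-override.conf", frag, 0o644); err != nil {
		panic(err)
	}
}
```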
Mar 17 20:43:17.239146 env[1151]: time="2025-03-17T20:43:17.239105025Z" level=info msg="shim disconnected" id=95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358 Mar 17 20:43:17.239581 env[1151]: time="2025-03-17T20:43:17.239562104Z" level=warning msg="cleaning up after shim disconnected" id=95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358 namespace=k8s.io Mar 17 20:43:17.239714 env[1151]: time="2025-03-17T20:43:17.239696386Z" level=info msg="cleaning up dead shim" Mar 17 20:43:17.246740 env[1151]: time="2025-03-17T20:43:17.246717097Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:43:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2418 runtime=io.containerd.runc.v2\n" Mar 17 20:43:18.063350 env[1151]: time="2025-03-17T20:43:18.063261183Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 20:43:18.103163 systemd[1]: run-containerd-runc-k8s.io-95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358-runc.VtWHdK.mount: Deactivated successfully. Mar 17 20:43:18.103384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358-rootfs.mount: Deactivated successfully. Mar 17 20:43:18.129287 env[1151]: time="2025-03-17T20:43:18.129196837Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\"" Mar 17 20:43:18.133006 env[1151]: time="2025-03-17T20:43:18.132943611Z" level=info msg="StartContainer for \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\"" Mar 17 20:43:18.172501 systemd[1]: run-containerd-runc-k8s.io-7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81-runc.ugkkhV.mount: Deactivated successfully. Mar 17 20:43:18.177743 systemd[1]: Started cri-containerd-7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81.scope. Mar 17 20:43:18.212786 systemd[1]: cri-containerd-7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81.scope: Deactivated successfully. 
Mar 17 20:43:18.236186 env[1151]: time="2025-03-17T20:43:18.236086792Z" level=info msg="StartContainer for \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\" returns successfully" Mar 17 20:43:18.309750 env[1151]: time="2025-03-17T20:43:18.309665384Z" level=info msg="shim disconnected" id=7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81 Mar 17 20:43:18.310190 env[1151]: time="2025-03-17T20:43:18.310169673Z" level=warning msg="cleaning up after shim disconnected" id=7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81 namespace=k8s.io Mar 17 20:43:18.310281 env[1151]: time="2025-03-17T20:43:18.310266104Z" level=info msg="cleaning up dead shim" Mar 17 20:43:18.320179 env[1151]: time="2025-03-17T20:43:18.320086075Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:43:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2475 runtime=io.containerd.runc.v2\n" Mar 17 20:43:19.078595 env[1151]: time="2025-03-17T20:43:19.078478425Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 20:43:19.103846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81-rootfs.mount: Deactivated successfully. Mar 17 20:43:19.140091 env[1151]: time="2025-03-17T20:43:19.140038617Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\"" Mar 17 20:43:19.140767 env[1151]: time="2025-03-17T20:43:19.140734986Z" level=info msg="StartContainer for \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\"" Mar 17 20:43:19.163979 systemd[1]: Started cri-containerd-04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476.scope. Mar 17 20:43:19.189985 systemd[1]: cri-containerd-04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476.scope: Deactivated successfully. Mar 17 20:43:19.193593 env[1151]: time="2025-03-17T20:43:19.193460501Z" level=info msg="StartContainer for \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\" returns successfully" Mar 17 20:43:19.216205 env[1151]: time="2025-03-17T20:43:19.216144790Z" level=info msg="shim disconnected" id=04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476 Mar 17 20:43:19.216205 env[1151]: time="2025-03-17T20:43:19.216194063Z" level=warning msg="cleaning up after shim disconnected" id=04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476 namespace=k8s.io Mar 17 20:43:19.216205 env[1151]: time="2025-03-17T20:43:19.216204122Z" level=info msg="cleaning up dead shim" Mar 17 20:43:19.222917 env[1151]: time="2025-03-17T20:43:19.222875421Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:43:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2531 runtime=io.containerd.runc.v2\n" Mar 17 20:43:20.091694 env[1151]: time="2025-03-17T20:43:20.091232886Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 20:43:20.102307 systemd[1]: run-containerd-runc-k8s.io-04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476-runc.Nnw7G2.mount: Deactivated successfully. 
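The mount-bpf-fs init step above amounts to a single mount(2) call: make the BPF filesystem available at its conventional path so later containers can pin maps and programs there. A minimal sketch of that call, assuming the usual mount point and the golang.org/x/sys/unix package; the log records neither:

```go
package main

import "golang.org/x/sys/unix"

func main() {
	// Mount bpffs at its conventional location. EBUSY means it was
	// already mounted, which an idempotent init container tolerates.
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if err != nil && err != unix.EBUSY {
		panic(err)
	}
}
```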
Mar 17 20:43:20.102522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476-rootfs.mount: Deactivated successfully. Mar 17 20:43:20.143195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618257075.mount: Deactivated successfully. Mar 17 20:43:20.147590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount479715899.mount: Deactivated successfully. Mar 17 20:43:20.149036 env[1151]: time="2025-03-17T20:43:20.148870580Z" level=info msg="CreateContainer within sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\"" Mar 17 20:43:20.156719 env[1151]: time="2025-03-17T20:43:20.155777341Z" level=info msg="StartContainer for \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\"" Mar 17 20:43:20.178256 systemd[1]: Started cri-containerd-ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293.scope. Mar 17 20:43:20.212606 env[1151]: time="2025-03-17T20:43:20.212571118Z" level=info msg="StartContainer for \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\" returns successfully" Mar 17 20:43:20.384413 kubelet[1900]: I0317 20:43:20.383683 1900 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 20:43:20.440871 systemd[1]: Created slice kubepods-burstable-pod52befd08_ba1b_4181_87df_e6c24dc736b0.slice. Mar 17 20:43:20.447442 systemd[1]: Created slice kubepods-burstable-pod4aec0924_ceef_4b86_b7a7_687d0a59e0a4.slice. Mar 17 20:43:20.517109 kubelet[1900]: I0317 20:43:20.517063 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj766\" (UniqueName: \"kubernetes.io/projected/52befd08-ba1b-4181-87df-e6c24dc736b0-kube-api-access-tj766\") pod \"coredns-668d6bf9bc-rqtmg\" (UID: \"52befd08-ba1b-4181-87df-e6c24dc736b0\") " pod="kube-system/coredns-668d6bf9bc-rqtmg" Mar 17 20:43:20.517386 kubelet[1900]: I0317 20:43:20.517340 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4aec0924-ceef-4b86-b7a7-687d0a59e0a4-config-volume\") pod \"coredns-668d6bf9bc-cpjz8\" (UID: \"4aec0924-ceef-4b86-b7a7-687d0a59e0a4\") " pod="kube-system/coredns-668d6bf9bc-cpjz8" Mar 17 20:43:20.517444 kubelet[1900]: I0317 20:43:20.517407 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9l6p\" (UniqueName: \"kubernetes.io/projected/4aec0924-ceef-4b86-b7a7-687d0a59e0a4-kube-api-access-s9l6p\") pod \"coredns-668d6bf9bc-cpjz8\" (UID: \"4aec0924-ceef-4b86-b7a7-687d0a59e0a4\") " pod="kube-system/coredns-668d6bf9bc-cpjz8" Mar 17 20:43:20.517491 kubelet[1900]: I0317 20:43:20.517451 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52befd08-ba1b-4181-87df-e6c24dc736b0-config-volume\") pod \"coredns-668d6bf9bc-rqtmg\" (UID: \"52befd08-ba1b-4181-87df-e6c24dc736b0\") " pod="kube-system/coredns-668d6bf9bc-rqtmg" Mar 17 20:43:20.746109 env[1151]: time="2025-03-17T20:43:20.745918346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqtmg,Uid:52befd08-ba1b-4181-87df-e6c24dc736b0,Namespace:kube-system,Attempt:0,}" Mar 17 20:43:20.752092 env[1151]: 
time="2025-03-17T20:43:20.751547916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cpjz8,Uid:4aec0924-ceef-4b86-b7a7-687d0a59e0a4,Namespace:kube-system,Attempt:0,}" Mar 17 20:43:21.264569 kubelet[1900]: I0317 20:43:21.264422 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4gnqx" podStartSLOduration=6.743339896 podStartE2EDuration="23.264385815s" podCreationTimestamp="2025-03-17 20:42:58 +0000 UTC" firstStartedPulling="2025-03-17 20:42:58.55448429 +0000 UTC m=+6.777480728" lastFinishedPulling="2025-03-17 20:43:15.075530219 +0000 UTC m=+23.298526647" observedRunningTime="2025-03-17 20:43:21.263887679 +0000 UTC m=+29.486884207" watchObservedRunningTime="2025-03-17 20:43:21.264385815 +0000 UTC m=+29.487382343" Mar 17 20:43:23.167577 systemd-networkd[983]: cilium_host: Link UP Mar 17 20:43:23.169257 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 20:43:23.169342 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 20:43:23.168036 systemd-networkd[983]: cilium_net: Link UP Mar 17 20:43:23.168398 systemd-networkd[983]: cilium_net: Gained carrier Mar 17 20:43:23.171561 systemd-networkd[983]: cilium_host: Gained carrier Mar 17 20:43:23.296718 systemd-networkd[983]: cilium_vxlan: Link UP Mar 17 20:43:23.296725 systemd-networkd[983]: cilium_vxlan: Gained carrier Mar 17 20:43:23.413929 systemd-networkd[983]: cilium_net: Gained IPv6LL Mar 17 20:43:23.469898 systemd-networkd[983]: cilium_host: Gained IPv6LL Mar 17 20:43:23.654673 kernel: NET: Registered PF_ALG protocol family Mar 17 20:43:24.403492 systemd-networkd[983]: lxc_health: Link UP Mar 17 20:43:24.430944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 20:43:24.430570 systemd-networkd[983]: lxc_health: Gained carrier Mar 17 20:43:24.517766 systemd-networkd[983]: cilium_vxlan: Gained IPv6LL Mar 17 20:43:24.990385 systemd-networkd[983]: lxcf5eb1a30ccd5: Link UP Mar 17 20:43:24.997761 kernel: eth0: renamed from tmp31046 Mar 17 20:43:25.004817 systemd-networkd[983]: lxcf5eb1a30ccd5: Gained carrier Mar 17 20:43:25.009787 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf5eb1a30ccd5: link becomes ready Mar 17 20:43:25.032761 systemd-networkd[983]: lxcb0d2bd12690b: Link UP Mar 17 20:43:25.041693 kernel: eth0: renamed from tmpa17f8 Mar 17 20:43:25.055710 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb0d2bd12690b: link becomes ready Mar 17 20:43:25.055200 systemd-networkd[983]: lxcb0d2bd12690b: Gained carrier Mar 17 20:43:25.606814 systemd-networkd[983]: lxc_health: Gained IPv6LL Mar 17 20:43:26.341743 systemd-networkd[983]: lxcf5eb1a30ccd5: Gained IPv6LL Mar 17 20:43:26.629891 systemd-networkd[983]: lxcb0d2bd12690b: Gained IPv6LL Mar 17 20:43:29.519946 env[1151]: time="2025-03-17T20:43:29.519869476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:43:29.520279 env[1151]: time="2025-03-17T20:43:29.519915290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:43:29.520279 env[1151]: time="2025-03-17T20:43:29.519935957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:43:29.520619 env[1151]: time="2025-03-17T20:43:29.520570706Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/310463922061028165cbe36bb4d61f83af2643e721e1d7db125a86d18318a6a6 pid=3088 runtime=io.containerd.runc.v2 Mar 17 20:43:29.544192 systemd[1]: run-containerd-runc-k8s.io-310463922061028165cbe36bb4d61f83af2643e721e1d7db125a86d18318a6a6-runc.vTye0Q.mount: Deactivated successfully. Mar 17 20:43:29.546448 systemd[1]: Started cri-containerd-310463922061028165cbe36bb4d61f83af2643e721e1d7db125a86d18318a6a6.scope. Mar 17 20:43:29.624499 env[1151]: time="2025-03-17T20:43:29.617348297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:43:29.624499 env[1151]: time="2025-03-17T20:43:29.617423644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:43:29.624499 env[1151]: time="2025-03-17T20:43:29.617439022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:43:29.624499 env[1151]: time="2025-03-17T20:43:29.617591508Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a17f87fadd227e3baca01baad807f93a52e1b9f55199ea196bd65cae651e4777 pid=3122 runtime=io.containerd.runc.v2 Mar 17 20:43:29.640432 systemd[1]: Started cri-containerd-a17f87fadd227e3baca01baad807f93a52e1b9f55199ea196bd65cae651e4777.scope. Mar 17 20:43:29.663614 env[1151]: time="2025-03-17T20:43:29.663574401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cpjz8,Uid:4aec0924-ceef-4b86-b7a7-687d0a59e0a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"310463922061028165cbe36bb4d61f83af2643e721e1d7db125a86d18318a6a6\"" Mar 17 20:43:29.670938 env[1151]: time="2025-03-17T20:43:29.670888106Z" level=info msg="CreateContainer within sandbox \"310463922061028165cbe36bb4d61f83af2643e721e1d7db125a86d18318a6a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 20:43:29.703216 env[1151]: time="2025-03-17T20:43:29.702557407Z" level=info msg="CreateContainer within sandbox \"310463922061028165cbe36bb4d61f83af2643e721e1d7db125a86d18318a6a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f57aba226d836261cd5b11da63bf76b48d0a4f512c32799d21ed7bb3e1d8ef6\"" Mar 17 20:43:29.704526 env[1151]: time="2025-03-17T20:43:29.704396409Z" level=info msg="StartContainer for \"8f57aba226d836261cd5b11da63bf76b48d0a4f512c32799d21ed7bb3e1d8ef6\"" Mar 17 20:43:29.718789 env[1151]: time="2025-03-17T20:43:29.717425143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqtmg,Uid:52befd08-ba1b-4181-87df-e6c24dc736b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a17f87fadd227e3baca01baad807f93a52e1b9f55199ea196bd65cae651e4777\"" Mar 17 20:43:29.723141 env[1151]: time="2025-03-17T20:43:29.723105009Z" level=info msg="CreateContainer within sandbox \"a17f87fadd227e3baca01baad807f93a52e1b9f55199ea196bd65cae651e4777\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 20:43:29.745115 systemd[1]: Started cri-containerd-8f57aba226d836261cd5b11da63bf76b48d0a4f512c32799d21ed7bb3e1d8ef6.scope. 
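[Note] The "starting signal loop" entries above show containerd's runc v2 shim coming up for each new CoreDNS sandbox, with its pid and a task path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/. A minimal sketch for inspecting that namespace with the containerd Go client — assuming the default socket path and root access on the node; not part of the log itself:

```go
// Minimal sketch: list containers and task status in the "k8s.io"
// namespace, the namespace the CRI plugin uses in the entries above.
// Assumes github.com/containerd/containerd and the default socket.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			fmt.Printf("%s: no running task\n", c.ID())
			continue
		}
		status, err := task.Status(ctx)
		if err != nil {
			log.Fatal(err)
		}
		// Pid() is the shim-reported init PID, matching the pid=
		// fields in the "starting signal loop" entries above.
		fmt.Printf("%s: %s (pid %d)\n", c.ID(), status.Status, task.Pid())
	}
}
```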
Mar 17 20:43:29.747168 env[1151]: time="2025-03-17T20:43:29.746928483Z" level=info msg="CreateContainer within sandbox \"a17f87fadd227e3baca01baad807f93a52e1b9f55199ea196bd65cae651e4777\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"705beb6ffa6d82c765aaff3b7beb9b7da00d333121eb385dd9ec11c10a1f2bbd\"" Mar 17 20:43:29.748279 env[1151]: time="2025-03-17T20:43:29.748256908Z" level=info msg="StartContainer for \"705beb6ffa6d82c765aaff3b7beb9b7da00d333121eb385dd9ec11c10a1f2bbd\"" Mar 17 20:43:29.782334 systemd[1]: Started cri-containerd-705beb6ffa6d82c765aaff3b7beb9b7da00d333121eb385dd9ec11c10a1f2bbd.scope. Mar 17 20:43:29.806781 env[1151]: time="2025-03-17T20:43:29.806745389Z" level=info msg="StartContainer for \"8f57aba226d836261cd5b11da63bf76b48d0a4f512c32799d21ed7bb3e1d8ef6\" returns successfully" Mar 17 20:43:29.821288 env[1151]: time="2025-03-17T20:43:29.821248453Z" level=info msg="StartContainer for \"705beb6ffa6d82c765aaff3b7beb9b7da00d333121eb385dd9ec11c10a1f2bbd\" returns successfully" Mar 17 20:43:30.181747 kubelet[1900]: I0317 20:43:30.181498 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rqtmg" podStartSLOduration=34.181389287 podStartE2EDuration="34.181389287s" podCreationTimestamp="2025-03-17 20:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:43:30.152921933 +0000 UTC m=+38.375918441" watchObservedRunningTime="2025-03-17 20:43:30.181389287 +0000 UTC m=+38.404385785" Mar 17 20:43:30.530533 systemd[1]: run-containerd-runc-k8s.io-a17f87fadd227e3baca01baad807f93a52e1b9f55199ea196bd65cae651e4777-runc.HmVyiv.mount: Deactivated successfully. Mar 17 20:44:28.494244 systemd[1]: Started sshd@7-172.24.4.253:22-172.24.4.1:43052.service. Mar 17 20:44:29.812465 sshd[3256]: Accepted publickey for core from 172.24.4.1 port 43052 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:44:29.815107 sshd[3256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:44:29.821530 systemd[1]: Started session-8.scope. Mar 17 20:44:29.822429 systemd-logind[1140]: New session 8 of user core. Mar 17 20:44:30.654411 sshd[3256]: pam_unix(sshd:session): session closed for user core Mar 17 20:44:30.659585 systemd[1]: sshd@7-172.24.4.253:22-172.24.4.1:43052.service: Deactivated successfully. Mar 17 20:44:30.661389 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 20:44:30.662739 systemd-logind[1140]: Session 8 logged out. Waiting for processes to exit. Mar 17 20:44:30.664585 systemd-logind[1140]: Removed session 8. Mar 17 20:44:35.665349 systemd[1]: Started sshd@8-172.24.4.253:22-172.24.4.1:35352.service. Mar 17 20:44:36.797227 sshd[3269]: Accepted publickey for core from 172.24.4.1 port 35352 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:44:36.799488 sshd[3269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:44:36.810795 systemd-logind[1140]: New session 9 of user core. Mar 17 20:44:36.811186 systemd[1]: Started session-9.scope. Mar 17 20:44:37.648219 sshd[3269]: pam_unix(sshd:session): session closed for user core Mar 17 20:44:37.653736 systemd[1]: sshd@8-172.24.4.253:22-172.24.4.1:35352.service: Deactivated successfully. Mar 17 20:44:37.655793 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 20:44:37.657143 systemd-logind[1140]: Session 9 logged out. Waiting for processes to exit. 
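[Note] On the "Observed pod startup duration" entries at 20:43:21 and 20:43:30: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes time spent pulling images. For cilium-4gnqx: E2E = 20:43:21.264385815 − 20:42:58 = 23.264385815 s; the pull window is m=+23.298526647 − m=+6.777480728 = 16.521045919 s; 23.264385815 − 16.521045919 = 6.743339896 s, the logged SLO duration. For coredns-668d6bf9bc-rqtmg the pull timestamps are the Go zero time (no pull was needed), so SLO and E2E durations are both 34.181389287 s.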
Mar 17 20:44:37.659535 systemd-logind[1140]: Removed session 9. Mar 17 20:44:42.658329 systemd[1]: Started sshd@9-172.24.4.253:22-172.24.4.1:35356.service. Mar 17 20:44:44.085295 sshd[3282]: Accepted publickey for core from 172.24.4.1 port 35356 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:44:44.088061 sshd[3282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:44:44.099386 systemd-logind[1140]: New session 10 of user core. Mar 17 20:44:44.099854 systemd[1]: Started session-10.scope. Mar 17 20:44:44.815506 sshd[3282]: pam_unix(sshd:session): session closed for user core Mar 17 20:44:44.820847 systemd[1]: sshd@9-172.24.4.253:22-172.24.4.1:35356.service: Deactivated successfully. Mar 17 20:44:44.822471 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 20:44:44.823933 systemd-logind[1140]: Session 10 logged out. Waiting for processes to exit. Mar 17 20:44:44.826088 systemd-logind[1140]: Removed session 10. Mar 17 20:44:49.837910 systemd[1]: Started sshd@10-172.24.4.253:22-172.24.4.1:37078.service. Mar 17 20:44:51.161038 sshd[3295]: Accepted publickey for core from 172.24.4.1 port 37078 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:44:51.164374 sshd[3295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:44:51.176088 systemd[1]: Started session-11.scope. Mar 17 20:44:51.177866 systemd-logind[1140]: New session 11 of user core. Mar 17 20:44:51.976769 sshd[3295]: pam_unix(sshd:session): session closed for user core Mar 17 20:44:51.982899 systemd[1]: sshd@10-172.24.4.253:22-172.24.4.1:37078.service: Deactivated successfully. Mar 17 20:44:51.984366 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 20:44:51.985588 systemd-logind[1140]: Session 11 logged out. Waiting for processes to exit. Mar 17 20:44:51.988554 systemd[1]: Started sshd@11-172.24.4.253:22-172.24.4.1:37090.service. Mar 17 20:44:51.993448 systemd-logind[1140]: Removed session 11. Mar 17 20:44:53.381132 sshd[3310]: Accepted publickey for core from 172.24.4.1 port 37090 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:44:53.384423 sshd[3310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:44:53.396211 systemd-logind[1140]: New session 12 of user core. Mar 17 20:44:53.396857 systemd[1]: Started session-12.scope. Mar 17 20:44:54.227828 sshd[3310]: pam_unix(sshd:session): session closed for user core Mar 17 20:44:54.232948 systemd[1]: sshd@11-172.24.4.253:22-172.24.4.1:37090.service: Deactivated successfully. Mar 17 20:44:54.234517 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 20:44:54.235859 systemd-logind[1140]: Session 12 logged out. Waiting for processes to exit. Mar 17 20:44:54.238597 systemd[1]: Started sshd@12-172.24.4.253:22-172.24.4.1:36302.service. Mar 17 20:44:54.242110 systemd-logind[1140]: Removed session 12. Mar 17 20:44:55.696144 sshd[3320]: Accepted publickey for core from 172.24.4.1 port 36302 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:44:55.698428 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:44:55.706792 systemd-logind[1140]: New session 13 of user core. Mar 17 20:44:55.708795 systemd[1]: Started session-13.scope. Mar 17 20:44:56.479770 sshd[3320]: pam_unix(sshd:session): session closed for user core Mar 17 20:44:56.484708 systemd[1]: sshd@12-172.24.4.253:22-172.24.4.1:36302.service: Deactivated successfully. 
Mar 17 20:44:56.486883 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 20:44:56.488219 systemd-logind[1140]: Session 13 logged out. Waiting for processes to exit. Mar 17 20:44:56.490446 systemd-logind[1140]: Removed session 13. Mar 17 20:45:01.488798 systemd[1]: Started sshd@13-172.24.4.253:22-172.24.4.1:36316.service. Mar 17 20:45:02.896003 sshd[3335]: Accepted publickey for core from 172.24.4.1 port 36316 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:02.897030 sshd[3335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:02.909760 systemd-logind[1140]: New session 14 of user core. Mar 17 20:45:02.911376 systemd[1]: Started session-14.scope. Mar 17 20:45:03.679228 sshd[3335]: pam_unix(sshd:session): session closed for user core Mar 17 20:45:03.684953 systemd-logind[1140]: Session 14 logged out. Waiting for processes to exit. Mar 17 20:45:03.685187 systemd[1]: sshd@13-172.24.4.253:22-172.24.4.1:36316.service: Deactivated successfully. Mar 17 20:45:03.686699 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 20:45:03.688618 systemd-logind[1140]: Removed session 14. Mar 17 20:45:08.688878 systemd[1]: Started sshd@14-172.24.4.253:22-172.24.4.1:36664.service. Mar 17 20:45:10.042537 sshd[3348]: Accepted publickey for core from 172.24.4.1 port 36664 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:10.045202 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:10.056450 systemd[1]: Started session-15.scope. Mar 17 20:45:10.057839 systemd-logind[1140]: New session 15 of user core. Mar 17 20:45:10.868257 sshd[3348]: pam_unix(sshd:session): session closed for user core Mar 17 20:45:10.878012 systemd[1]: sshd@14-172.24.4.253:22-172.24.4.1:36664.service: Deactivated successfully. Mar 17 20:45:10.880242 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 20:45:10.882361 systemd-logind[1140]: Session 15 logged out. Waiting for processes to exit. Mar 17 20:45:10.887119 systemd[1]: Started sshd@15-172.24.4.253:22-172.24.4.1:36676.service. Mar 17 20:45:10.890525 systemd-logind[1140]: Removed session 15. Mar 17 20:45:12.050783 sshd[3360]: Accepted publickey for core from 172.24.4.1 port 36676 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:12.052893 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:12.059457 systemd[1]: Started session-16.scope. Mar 17 20:45:12.060103 systemd-logind[1140]: New session 16 of user core. Mar 17 20:45:13.006001 sshd[3360]: pam_unix(sshd:session): session closed for user core Mar 17 20:45:13.013420 systemd[1]: Started sshd@16-172.24.4.253:22-172.24.4.1:36692.service. Mar 17 20:45:13.016478 systemd[1]: sshd@15-172.24.4.253:22-172.24.4.1:36676.service: Deactivated successfully. Mar 17 20:45:13.019212 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 20:45:13.022974 systemd-logind[1140]: Session 16 logged out. Waiting for processes to exit. Mar 17 20:45:13.026332 systemd-logind[1140]: Removed session 16. Mar 17 20:45:14.204371 sshd[3369]: Accepted publickey for core from 172.24.4.1 port 36692 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:14.206257 sshd[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:14.216198 systemd-logind[1140]: New session 17 of user core. Mar 17 20:45:14.216926 systemd[1]: Started session-17.scope. 
Mar 17 20:45:16.093803 sshd[3369]: pam_unix(sshd:session): session closed for user core Mar 17 20:45:16.099330 systemd[1]: sshd@16-172.24.4.253:22-172.24.4.1:36692.service: Deactivated successfully. Mar 17 20:45:16.101259 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 20:45:16.103353 systemd-logind[1140]: Session 17 logged out. Waiting for processes to exit. Mar 17 20:45:16.106735 systemd[1]: Started sshd@17-172.24.4.253:22-172.24.4.1:39372.service. Mar 17 20:45:16.109916 systemd-logind[1140]: Removed session 17. Mar 17 20:45:17.451090 sshd[3388]: Accepted publickey for core from 172.24.4.1 port 39372 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:17.452762 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:17.458032 systemd-logind[1140]: New session 18 of user core. Mar 17 20:45:17.458473 systemd[1]: Started session-18.scope. Mar 17 20:45:18.376271 sshd[3388]: pam_unix(sshd:session): session closed for user core Mar 17 20:45:18.381237 systemd[1]: sshd@17-172.24.4.253:22-172.24.4.1:39372.service: Deactivated successfully. Mar 17 20:45:18.382851 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 20:45:18.385412 systemd-logind[1140]: Session 18 logged out. Waiting for processes to exit. Mar 17 20:45:18.388669 systemd[1]: Started sshd@18-172.24.4.253:22-172.24.4.1:39386.service. Mar 17 20:45:18.391721 systemd-logind[1140]: Removed session 18. Mar 17 20:45:19.735091 sshd[3398]: Accepted publickey for core from 172.24.4.1 port 39386 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:19.736993 sshd[3398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:19.743813 systemd[1]: Started session-19.scope. Mar 17 20:45:19.744466 systemd-logind[1140]: New session 19 of user core. Mar 17 20:45:20.440341 sshd[3398]: pam_unix(sshd:session): session closed for user core Mar 17 20:45:20.447130 systemd[1]: sshd@18-172.24.4.253:22-172.24.4.1:39386.service: Deactivated successfully. Mar 17 20:45:20.448721 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 20:45:20.449004 systemd-logind[1140]: Session 19 logged out. Waiting for processes to exit. Mar 17 20:45:20.451830 systemd-logind[1140]: Removed session 19. Mar 17 20:45:25.449506 systemd[1]: Started sshd@19-172.24.4.253:22-172.24.4.1:37350.service. Mar 17 20:45:26.797814 sshd[3412]: Accepted publickey for core from 172.24.4.1 port 37350 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:26.800717 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:26.812310 systemd[1]: Started session-20.scope. Mar 17 20:45:26.813169 systemd-logind[1140]: New session 20 of user core. Mar 17 20:45:27.542935 sshd[3412]: pam_unix(sshd:session): session closed for user core Mar 17 20:45:27.548284 systemd[1]: sshd@19-172.24.4.253:22-172.24.4.1:37350.service: Deactivated successfully. Mar 17 20:45:27.549937 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 20:45:27.551562 systemd-logind[1140]: Session 20 logged out. Waiting for processes to exit. Mar 17 20:45:27.553896 systemd-logind[1140]: Removed session 20. Mar 17 20:45:32.551785 systemd[1]: Started sshd@20-172.24.4.253:22-172.24.4.1:37362.service. 
Mar 17 20:45:34.010891 sshd[3426]: Accepted publickey for core from 172.24.4.1 port 37362 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:34.013873 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:34.027458 systemd-logind[1140]: New session 21 of user core. Mar 17 20:45:34.028994 systemd[1]: Started session-21.scope. Mar 17 20:45:34.811939 sshd[3426]: pam_unix(sshd:session): session closed for user core Mar 17 20:45:34.816975 systemd[1]: sshd@20-172.24.4.253:22-172.24.4.1:37362.service: Deactivated successfully. Mar 17 20:45:34.818747 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 20:45:34.820157 systemd-logind[1140]: Session 21 logged out. Waiting for processes to exit. Mar 17 20:45:34.822504 systemd-logind[1140]: Removed session 21. Mar 17 20:45:39.822602 systemd[1]: Started sshd@21-172.24.4.253:22-172.24.4.1:54610.service. Mar 17 20:45:40.971437 sshd[3438]: Accepted publickey for core from 172.24.4.1 port 54610 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:40.974945 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:40.988802 systemd-logind[1140]: New session 22 of user core. Mar 17 20:45:40.989562 systemd[1]: Started session-22.scope. Mar 17 20:45:41.756047 sshd[3438]: pam_unix(sshd:session): session closed for user core Mar 17 20:45:41.766736 systemd[1]: sshd@21-172.24.4.253:22-172.24.4.1:54610.service: Deactivated successfully. Mar 17 20:45:41.769168 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 20:45:41.772052 systemd-logind[1140]: Session 22 logged out. Waiting for processes to exit. Mar 17 20:45:41.777719 systemd[1]: Started sshd@22-172.24.4.253:22-172.24.4.1:54618.service. Mar 17 20:45:41.782478 systemd-logind[1140]: Removed session 22. Mar 17 20:45:43.160971 sshd[3450]: Accepted publickey for core from 172.24.4.1 port 54618 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws Mar 17 20:45:43.164011 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:45:43.176964 systemd[1]: Started session-23.scope. Mar 17 20:45:43.178834 systemd-logind[1140]: New session 23 of user core. Mar 17 20:45:45.254535 kubelet[1900]: I0317 20:45:45.254337 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cpjz8" podStartSLOduration=169.25422045 podStartE2EDuration="2m49.25422045s" podCreationTimestamp="2025-03-17 20:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:43:30.225712282 +0000 UTC m=+38.448708760" watchObservedRunningTime="2025-03-17 20:45:45.25422045 +0000 UTC m=+173.477216978" Mar 17 20:45:45.288483 systemd[1]: run-containerd-runc-k8s.io-ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293-runc.rlSIbn.mount: Deactivated successfully. Mar 17 20:45:45.293708 env[1151]: time="2025-03-17T20:45:45.293657850Z" level=info msg="StopContainer for \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\" with timeout 30 (s)" Mar 17 20:45:45.294257 env[1151]: time="2025-03-17T20:45:45.294205262Z" level=info msg="Stop container \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\" with signal terminated" Mar 17 20:45:45.311476 systemd[1]: cri-containerd-9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534.scope: Deactivated successfully. 
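[Note] The teardown starting at 20:45:45 above has the kubelet stop the cilium-operator container with a 30-second grace period (and, just below, cilium-agent with 2 seconds). A hedged sketch of the CRI call behind those "StopContainer ... with timeout N" lines — assuming the k8s.io/cri-api v1 Go bindings and containerd's CRI endpoint on the same socket; the runtime delivers SIGTERM, waits out the timeout, then SIGKILLs:

```go
// Sketch of the CRI StopContainer call, as in "StopContainer for
// \"9b93...\" with timeout 30 (s)" above. Assumes cri-api v1 and
// containerd's CRI plugin; container ID here is illustrative.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// On a real node, take the ID from `crictl ps`; this prefix is
	// only the container stopped in the log above.
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "9b93ac2e0f6f", // hypothetical/abbreviated ID
		Timeout:     30,             // grace period in seconds before SIGKILL
	})
	if err != nil {
		log.Fatal(err)
	}
}
```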
Mar 17 20:45:45.316036 env[1151]: time="2025-03-17T20:45:45.315948082Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 20:45:45.322803 env[1151]: time="2025-03-17T20:45:45.322728963Z" level=info msg="StopContainer for \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\" with timeout 2 (s)" Mar 17 20:45:45.323263 env[1151]: time="2025-03-17T20:45:45.323222383Z" level=info msg="Stop container \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\" with signal terminated" Mar 17 20:45:45.335405 systemd-networkd[983]: lxc_health: Link DOWN Mar 17 20:45:45.335414 systemd-networkd[983]: lxc_health: Lost carrier Mar 17 20:45:45.338660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534-rootfs.mount: Deactivated successfully. Mar 17 20:45:45.367522 env[1151]: time="2025-03-17T20:45:45.364736298Z" level=info msg="shim disconnected" id=9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534 Mar 17 20:45:45.367522 env[1151]: time="2025-03-17T20:45:45.364917869Z" level=warning msg="cleaning up after shim disconnected" id=9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534 namespace=k8s.io Mar 17 20:45:45.367522 env[1151]: time="2025-03-17T20:45:45.364936946Z" level=info msg="cleaning up dead shim" Mar 17 20:45:45.372973 systemd[1]: cri-containerd-ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293.scope: Deactivated successfully. Mar 17 20:45:45.373196 systemd[1]: cri-containerd-ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293.scope: Consumed 8.655s CPU time. Mar 17 20:45:45.381567 env[1151]: time="2025-03-17T20:45:45.381523848Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3501 runtime=io.containerd.runc.v2\n" Mar 17 20:45:45.387920 env[1151]: time="2025-03-17T20:45:45.387879207Z" level=info msg="StopContainer for \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\" returns successfully" Mar 17 20:45:45.388899 env[1151]: time="2025-03-17T20:45:45.388860937Z" level=info msg="StopPodSandbox for \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\"" Mar 17 20:45:45.388962 env[1151]: time="2025-03-17T20:45:45.388932172Z" level=info msg="Container to stop \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:45:45.391070 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7-shm.mount: Deactivated successfully. Mar 17 20:45:45.402587 systemd[1]: cri-containerd-b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7.scope: Deactivated successfully. 
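[Note] The level=error entry at 20:45:45.315 above is containerd's watcher on /etc/cni/net.d reacting to the removal of 05-cilium.conf during teardown: with the last network config gone, the reload fails, and the kubelet later reports "Container runtime network not ready ... cni plugin not initialized" (20:45:47.043, below) until a CNI plugin writes a new config. A minimal sketch of that watch mechanism, assuming github.com/fsnotify/fsnotify — an illustration, not containerd's actual code:

```go
// Watch /etc/cni/net.d and log change events; a REMOVE of the last
// .conf/.conflist leaves no network config, so a reload attempt at
// that point fails exactly as in the error entry above.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-watcher.Events:
			log.Printf("cni config event: %s %s", ev.Op, ev.Name)
		case err := <-watcher.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```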
Mar 17 20:45:45.412085 env[1151]: time="2025-03-17T20:45:45.410377932Z" level=info msg="shim disconnected" id=ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293 Mar 17 20:45:45.412085 env[1151]: time="2025-03-17T20:45:45.410454466Z" level=warning msg="cleaning up after shim disconnected" id=ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293 namespace=k8s.io Mar 17 20:45:45.412085 env[1151]: time="2025-03-17T20:45:45.410468012Z" level=info msg="cleaning up dead shim" Mar 17 20:45:45.430103 env[1151]: time="2025-03-17T20:45:45.430056922Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3541 runtime=io.containerd.runc.v2\n" Mar 17 20:45:45.435408 env[1151]: time="2025-03-17T20:45:45.435375466Z" level=info msg="StopContainer for \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\" returns successfully" Mar 17 20:45:45.436160 env[1151]: time="2025-03-17T20:45:45.436138164Z" level=info msg="StopPodSandbox for \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\"" Mar 17 20:45:45.436325 env[1151]: time="2025-03-17T20:45:45.436280483Z" level=info msg="Container to stop \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:45:45.436412 env[1151]: time="2025-03-17T20:45:45.436392133Z" level=info msg="Container to stop \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:45:45.436492 env[1151]: time="2025-03-17T20:45:45.436473235Z" level=info msg="Container to stop \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:45:45.436571 env[1151]: time="2025-03-17T20:45:45.436551453Z" level=info msg="Container to stop \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:45:45.436670 env[1151]: time="2025-03-17T20:45:45.436649217Z" level=info msg="Container to stop \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:45:45.437365 env[1151]: time="2025-03-17T20:45:45.437334508Z" level=info msg="shim disconnected" id=b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7 Mar 17 20:45:45.437752 env[1151]: time="2025-03-17T20:45:45.437713373Z" level=warning msg="cleaning up after shim disconnected" id=b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7 namespace=k8s.io Mar 17 20:45:45.437752 env[1151]: time="2025-03-17T20:45:45.437746615Z" level=info msg="cleaning up dead shim" Mar 17 20:45:45.443546 systemd[1]: cri-containerd-ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0.scope: Deactivated successfully. 
Mar 17 20:45:45.448687 env[1151]: time="2025-03-17T20:45:45.448652089Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3561 runtime=io.containerd.runc.v2\n" Mar 17 20:45:45.449068 env[1151]: time="2025-03-17T20:45:45.449027077Z" level=info msg="TearDown network for sandbox \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\" successfully" Mar 17 20:45:45.449068 env[1151]: time="2025-03-17T20:45:45.449052294Z" level=info msg="StopPodSandbox for \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\" returns successfully" Mar 17 20:45:45.493559 env[1151]: time="2025-03-17T20:45:45.493499777Z" level=info msg="shim disconnected" id=ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0 Mar 17 20:45:45.493862 env[1151]: time="2025-03-17T20:45:45.493842964Z" level=warning msg="cleaning up after shim disconnected" id=ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0 namespace=k8s.io Mar 17 20:45:45.493940 env[1151]: time="2025-03-17T20:45:45.493923636Z" level=info msg="cleaning up dead shim" Mar 17 20:45:45.502425 env[1151]: time="2025-03-17T20:45:45.502371740Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3593 runtime=io.containerd.runc.v2\n" Mar 17 20:45:45.502739 env[1151]: time="2025-03-17T20:45:45.502712051Z" level=info msg="TearDown network for sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" successfully" Mar 17 20:45:45.502798 env[1151]: time="2025-03-17T20:45:45.502739382Z" level=info msg="StopPodSandbox for \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" returns successfully" Mar 17 20:45:45.533076 kubelet[1900]: I0317 20:45:45.527993 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqcg5\" (UniqueName: \"kubernetes.io/projected/378d1445-161d-4308-bf2e-31133d8a34c4-kube-api-access-gqcg5\") pod \"378d1445-161d-4308-bf2e-31133d8a34c4\" (UID: \"378d1445-161d-4308-bf2e-31133d8a34c4\") " Mar 17 20:45:45.533076 kubelet[1900]: I0317 20:45:45.528055 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/378d1445-161d-4308-bf2e-31133d8a34c4-cilium-config-path\") pod \"378d1445-161d-4308-bf2e-31133d8a34c4\" (UID: \"378d1445-161d-4308-bf2e-31133d8a34c4\") " Mar 17 20:45:45.533076 kubelet[1900]: I0317 20:45:45.531842 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378d1445-161d-4308-bf2e-31133d8a34c4-kube-api-access-gqcg5" (OuterVolumeSpecName: "kube-api-access-gqcg5") pod "378d1445-161d-4308-bf2e-31133d8a34c4" (UID: "378d1445-161d-4308-bf2e-31133d8a34c4"). InnerVolumeSpecName "kube-api-access-gqcg5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 20:45:45.534889 kubelet[1900]: I0317 20:45:45.534861 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378d1445-161d-4308-bf2e-31133d8a34c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "378d1445-161d-4308-bf2e-31133d8a34c4" (UID: "378d1445-161d-4308-bf2e-31133d8a34c4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 20:45:45.568064 kubelet[1900]: I0317 20:45:45.568037 1900 scope.go:117] "RemoveContainer" containerID="9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534" Mar 17 20:45:45.571588 systemd[1]: Removed slice kubepods-besteffort-pod378d1445_161d_4308_bf2e_31133d8a34c4.slice. Mar 17 20:45:45.574577 env[1151]: time="2025-03-17T20:45:45.574545991Z" level=info msg="RemoveContainer for \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\"" Mar 17 20:45:45.608990 env[1151]: time="2025-03-17T20:45:45.608895052Z" level=info msg="RemoveContainer for \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\" returns successfully" Mar 17 20:45:45.610900 kubelet[1900]: I0317 20:45:45.610168 1900 scope.go:117] "RemoveContainer" containerID="9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534" Mar 17 20:45:45.613240 env[1151]: time="2025-03-17T20:45:45.613028212Z" level=error msg="ContainerStatus for \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\": not found" Mar 17 20:45:45.615435 kubelet[1900]: E0317 20:45:45.615289 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\": not found" containerID="9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534" Mar 17 20:45:45.616313 kubelet[1900]: I0317 20:45:45.615956 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534"} err="failed to get container status \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b93ac2e0f6f27c83514e71e6e3740f8a67e5eba618f5dca4f45b454f2123534\": not found" Mar 17 20:45:45.616519 kubelet[1900]: I0317 20:45:45.616495 1900 scope.go:117] "RemoveContainer" containerID="ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293" Mar 17 20:45:45.623495 env[1151]: time="2025-03-17T20:45:45.623223237Z" level=info msg="RemoveContainer for \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\"" Mar 17 20:45:45.629026 kubelet[1900]: I0317 20:45:45.628958 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-run\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629219 kubelet[1900]: I0317 20:45:45.629064 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-hostproc\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629299 kubelet[1900]: I0317 20:45:45.629203 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-lib-modules\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629393 kubelet[1900]: I0317 
20:45:45.629296 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmz6v\" (UniqueName: \"kubernetes.io/projected/3810c7b9-cddb-487d-9002-e7997ea05e95-kube-api-access-dmz6v\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629393 kubelet[1900]: I0317 20:45:45.629366 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-bpf-maps\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629518 kubelet[1900]: I0317 20:45:45.629430 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-cgroup\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629518 kubelet[1900]: I0317 20:45:45.629491 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cni-path\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629690 kubelet[1900]: I0317 20:45:45.629574 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3810c7b9-cddb-487d-9002-e7997ea05e95-clustermesh-secrets\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629772 kubelet[1900]: I0317 20:45:45.629690 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-xtables-lock\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629838 kubelet[1900]: I0317 20:45:45.629763 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-host-proc-sys-kernel\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629900 kubelet[1900]: I0317 20:45:45.629841 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-config-path\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.629964 kubelet[1900]: I0317 20:45:45.629899 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-etc-cni-netd\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.630030 kubelet[1900]: I0317 20:45:45.629962 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-host-proc-sys-net\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.630094 kubelet[1900]: I0317 20:45:45.630039 1900 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3810c7b9-cddb-487d-9002-e7997ea05e95-hubble-tls\") pod \"3810c7b9-cddb-487d-9002-e7997ea05e95\" (UID: \"3810c7b9-cddb-487d-9002-e7997ea05e95\") " Mar 17 20:45:45.630274 kubelet[1900]: I0317 20:45:45.630157 1900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gqcg5\" (UniqueName: \"kubernetes.io/projected/378d1445-161d-4308-bf2e-31133d8a34c4-kube-api-access-gqcg5\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.630274 kubelet[1900]: I0317 20:45:45.630219 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/378d1445-161d-4308-bf2e-31133d8a34c4-cilium-config-path\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.631519 kubelet[1900]: I0317 20:45:45.631137 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cni-path" (OuterVolumeSpecName: "cni-path") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.631519 kubelet[1900]: I0317 20:45:45.631241 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.631519 kubelet[1900]: I0317 20:45:45.631337 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-hostproc" (OuterVolumeSpecName: "hostproc") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.631519 kubelet[1900]: I0317 20:45:45.631401 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.632655 kubelet[1900]: I0317 20:45:45.632551 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.633031 kubelet[1900]: I0317 20:45:45.632968 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.637254 env[1151]: time="2025-03-17T20:45:45.637078973Z" level=info msg="RemoveContainer for \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\" returns successfully" Mar 17 20:45:45.639918 kubelet[1900]: I0317 20:45:45.639872 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.640181 kubelet[1900]: I0317 20:45:45.640129 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.640529 kubelet[1900]: I0317 20:45:45.640500 1900 scope.go:117] "RemoveContainer" containerID="04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476" Mar 17 20:45:45.641854 kubelet[1900]: I0317 20:45:45.641768 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 20:45:45.641972 kubelet[1900]: I0317 20:45:45.641919 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.642043 kubelet[1900]: I0317 20:45:45.642003 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 20:45:45.648592 env[1151]: time="2025-03-17T20:45:45.648080648Z" level=info msg="RemoveContainer for \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\"" Mar 17 20:45:45.650396 kubelet[1900]: I0317 20:45:45.650305 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3810c7b9-cddb-487d-9002-e7997ea05e95-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 20:45:45.651111 kubelet[1900]: I0317 20:45:45.651036 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3810c7b9-cddb-487d-9002-e7997ea05e95-kube-api-access-dmz6v" (OuterVolumeSpecName: "kube-api-access-dmz6v") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "kube-api-access-dmz6v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 20:45:45.654428 kubelet[1900]: I0317 20:45:45.654372 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3810c7b9-cddb-487d-9002-e7997ea05e95-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3810c7b9-cddb-487d-9002-e7997ea05e95" (UID: "3810c7b9-cddb-487d-9002-e7997ea05e95"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 20:45:45.731031 kubelet[1900]: I0317 20:45:45.730926 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-cgroup\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.731873 kubelet[1900]: I0317 20:45:45.731803 1900 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cni-path\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.732133 kubelet[1900]: I0317 20:45:45.732074 1900 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3810c7b9-cddb-487d-9002-e7997ea05e95-clustermesh-secrets\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.732359 kubelet[1900]: I0317 20:45:45.732305 1900 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-xtables-lock\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.732581 kubelet[1900]: I0317 20:45:45.732528 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-host-proc-sys-kernel\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.732846 kubelet[1900]: I0317 20:45:45.732788 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-host-proc-sys-net\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.733050 kubelet[1900]: I0317 20:45:45.733020 1900 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3810c7b9-cddb-487d-9002-e7997ea05e95-hubble-tls\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.733303 kubelet[1900]: I0317 20:45:45.733239 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-config-path\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.733535 kubelet[1900]: I0317 20:45:45.733482 1900 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-etc-cni-netd\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.733753 kubelet[1900]: I0317 20:45:45.733724 1900 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-hostproc\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.733959 kubelet[1900]: I0317 20:45:45.733932 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-cilium-run\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.734180 kubelet[1900]: I0317 20:45:45.734152 1900 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-lib-modules\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.734437 kubelet[1900]: I0317 20:45:45.734408 1900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmz6v\" (UniqueName: \"kubernetes.io/projected/3810c7b9-cddb-487d-9002-e7997ea05e95-kube-api-access-dmz6v\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.734678 kubelet[1900]: I0317 20:45:45.734616 1900 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3810c7b9-cddb-487d-9002-e7997ea05e95-bpf-maps\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\"" Mar 17 20:45:45.847097 env[1151]: time="2025-03-17T20:45:45.846783970Z" level=info msg="RemoveContainer for \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\" returns successfully" Mar 17 20:45:45.850326 kubelet[1900]: I0317 20:45:45.850266 1900 scope.go:117] "RemoveContainer" containerID="7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81" Mar 17 20:45:45.855776 env[1151]: time="2025-03-17T20:45:45.855704362Z" level=info msg="RemoveContainer for \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\"" Mar 17 20:45:45.888820 systemd[1]: Removed slice kubepods-burstable-pod3810c7b9_cddb_487d_9002_e7997ea05e95.slice. Mar 17 20:45:45.889059 systemd[1]: kubepods-burstable-pod3810c7b9_cddb_487d_9002_e7997ea05e95.slice: Consumed 8.758s CPU time. 
Mar 17 20:45:45.911964 kubelet[1900]: I0317 20:45:45.911913 1900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378d1445-161d-4308-bf2e-31133d8a34c4" path="/var/lib/kubelet/pods/378d1445-161d-4308-bf2e-31133d8a34c4/volumes"
Mar 17 20:45:46.077464 env[1151]: time="2025-03-17T20:45:46.077386234Z" level=info msg="RemoveContainer for \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\" returns successfully"
Mar 17 20:45:46.078162 kubelet[1900]: I0317 20:45:46.078088 1900 scope.go:117] "RemoveContainer" containerID="95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358"
Mar 17 20:45:46.085446 env[1151]: time="2025-03-17T20:45:46.085385981Z" level=info msg="RemoveContainer for \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\""
Mar 17 20:45:46.097814 env[1151]: time="2025-03-17T20:45:46.097090919Z" level=info msg="RemoveContainer for \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\" returns successfully"
Mar 17 20:45:46.099442 kubelet[1900]: I0317 20:45:46.099228 1900 scope.go:117] "RemoveContainer" containerID="8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543"
Mar 17 20:45:46.106389 env[1151]: time="2025-03-17T20:45:46.106327486Z" level=info msg="RemoveContainer for \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\""
Mar 17 20:45:46.117369 env[1151]: time="2025-03-17T20:45:46.117309341Z" level=info msg="RemoveContainer for \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\" returns successfully"
Mar 17 20:45:46.121051 kubelet[1900]: I0317 20:45:46.120815 1900 scope.go:117] "RemoveContainer" containerID="ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293"
Mar 17 20:45:46.121779 env[1151]: time="2025-03-17T20:45:46.121413866Z" level=error msg="ContainerStatus for \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\": not found"
Mar 17 20:45:46.122387 kubelet[1900]: E0317 20:45:46.122068 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\": not found" containerID="ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293"
Mar 17 20:45:46.122387 kubelet[1900]: I0317 20:45:46.122129 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293"} err="failed to get container status \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\": rpc error: code = NotFound desc = an error occurred when try to find container \"ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293\": not found"
Mar 17 20:45:46.122387 kubelet[1900]: I0317 20:45:46.122183 1900 scope.go:117] "RemoveContainer" containerID="04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476"
Mar 17 20:45:46.123083 env[1151]: time="2025-03-17T20:45:46.122907982Z" level=error msg="ContainerStatus for \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\": not found"
Mar 17 20:45:46.123678 kubelet[1900]: E0317 20:45:46.123399 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\": not found" containerID="04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476"
Mar 17 20:45:46.123678 kubelet[1900]: I0317 20:45:46.123452 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476"} err="failed to get container status \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\": rpc error: code = NotFound desc = an error occurred when try to find container \"04dadc079bfb5451b0940ef5e369b06a3a0326f04c191d23d0d6bcea4238e476\": not found"
Mar 17 20:45:46.123678 kubelet[1900]: I0317 20:45:46.123487 1900 scope.go:117] "RemoveContainer" containerID="7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81"
Mar 17 20:45:46.124079 env[1151]: time="2025-03-17T20:45:46.123877309Z" level=error msg="ContainerStatus for \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\": not found"
Mar 17 20:45:46.126145 kubelet[1900]: E0317 20:45:46.124354 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\": not found" containerID="7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81"
Mar 17 20:45:46.126716 kubelet[1900]: I0317 20:45:46.126400 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81"} err="failed to get container status \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e688391bfa05dc97ed9afc0ab9d7f7131dc5e19791c8e032b86c512784b1b81\": not found"
Mar 17 20:45:46.126716 kubelet[1900]: I0317 20:45:46.126467 1900 scope.go:117] "RemoveContainer" containerID="95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358"
Mar 17 20:45:46.126948 env[1151]: time="2025-03-17T20:45:46.126833570Z" level=error msg="ContainerStatus for \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\": not found"
Mar 17 20:45:46.127485 kubelet[1900]: E0317 20:45:46.127226 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\": not found" containerID="95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358"
Mar 17 20:45:46.127485 kubelet[1900]: I0317 20:45:46.127274 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358"} err="failed to get container status \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\": rpc error: code = NotFound desc = an error occurred when try to find container \"95386a1124c656978ba51c35981cef4e70f5f51e5d68572fa0ff7c6cced2d358\": not found"
Mar 17 20:45:46.127485 kubelet[1900]: I0317 20:45:46.127307 1900 scope.go:117] "RemoveContainer" containerID="8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543"
Mar 17 20:45:46.127863 env[1151]: time="2025-03-17T20:45:46.127725420Z" level=error msg="ContainerStatus for \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\": not found"
Mar 17 20:45:46.128244 kubelet[1900]: E0317 20:45:46.128086 1900 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\": not found" containerID="8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543"
Mar 17 20:45:46.128244 kubelet[1900]: I0317 20:45:46.128135 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543"} err="failed to get container status \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\": rpc error: code = NotFound desc = an error occurred when try to find container \"8af2092c0ff405dc977775a45dfed8dfa5fe6df67706b44d83d0049bc1734543\": not found"
Mar 17 20:45:46.281170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebe9abd3530d7562f10030f96346823852c026a4ca8a0ac76fed78f62370d293-rootfs.mount: Deactivated successfully.
Mar 17 20:45:46.281769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0-rootfs.mount: Deactivated successfully.
Mar 17 20:45:46.282125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0-shm.mount: Deactivated successfully.
Mar 17 20:45:46.282529 systemd[1]: var-lib-kubelet-pods-3810c7b9\x2dcddb\x2d487d\x2d9002\x2de7997ea05e95-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 20:45:46.282943 systemd[1]: var-lib-kubelet-pods-3810c7b9\x2dcddb\x2d487d\x2d9002\x2de7997ea05e95-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmz6v.mount: Deactivated successfully.
Mar 17 20:45:46.283286 systemd[1]: var-lib-kubelet-pods-3810c7b9\x2dcddb\x2d487d\x2d9002\x2de7997ea05e95-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 20:45:46.283672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7-rootfs.mount: Deactivated successfully.
Mar 17 20:45:46.284020 systemd[1]: var-lib-kubelet-pods-378d1445\x2d161d\x2d4308\x2dbf2e\x2d31133d8a34c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgqcg5.mount: Deactivated successfully.
Mar 17 20:45:47.043107 kubelet[1900]: E0317 20:45:47.042957 1900 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 20:45:47.366541 sshd[3450]: pam_unix(sshd:session): session closed for user core
Mar 17 20:45:47.375731 systemd[1]: Started sshd@23-172.24.4.253:22-172.24.4.1:34080.service.
Mar 17 20:45:47.376979 systemd[1]: sshd@22-172.24.4.253:22-172.24.4.1:54618.service: Deactivated successfully.
Mar 17 20:45:47.380730 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 20:45:47.380940 systemd[1]: session-23.scope: Consumed 1.038s CPU time.
Mar 17 20:45:47.384597 systemd-logind[1140]: Session 23 logged out. Waiting for processes to exit.
Mar 17 20:45:47.387157 systemd-logind[1140]: Removed session 23.
Mar 17 20:45:47.906759 kubelet[1900]: I0317 20:45:47.906707 1900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3810c7b9-cddb-487d-9002-e7997ea05e95" path="/var/lib/kubelet/pods/3810c7b9-cddb-487d-9002-e7997ea05e95/volumes"
Mar 17 20:45:48.765849 sshd[3610]: Accepted publickey for core from 172.24.4.1 port 34080 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:45:48.768285 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:45:48.778707 systemd-logind[1140]: New session 24 of user core.
Mar 17 20:45:48.779532 systemd[1]: Started session-24.scope.
Mar 17 20:45:49.935658 kubelet[1900]: I0317 20:45:49.935595 1900 memory_manager.go:355] "RemoveStaleState removing state" podUID="3810c7b9-cddb-487d-9002-e7997ea05e95" containerName="cilium-agent"
Mar 17 20:45:49.936066 kubelet[1900]: I0317 20:45:49.936051 1900 memory_manager.go:355] "RemoveStaleState removing state" podUID="378d1445-161d-4308-bf2e-31133d8a34c4" containerName="cilium-operator"
Mar 17 20:45:49.942447 systemd[1]: Created slice kubepods-burstable-podf5617a4e_b756_4bcb_921c_7572f6cea0a7.slice.
Mar 17 20:45:50.063031 kubelet[1900]: I0317 20:45:50.062932 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5617a4e-b756-4bcb-921c-7572f6cea0a7-clustermesh-secrets\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063249 kubelet[1900]: I0317 20:45:50.063041 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-ipsec-secrets\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063249 kubelet[1900]: I0317 20:45:50.063121 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-bpf-maps\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063249 kubelet[1900]: I0317 20:45:50.063187 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-cgroup\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063249 kubelet[1900]: I0317 20:45:50.063224 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-run\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063468 kubelet[1900]: I0317 20:45:50.063285 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cni-path\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063468 kubelet[1900]: I0317 20:45:50.063318 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-xtables-lock\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063468 kubelet[1900]: I0317 20:45:50.063386 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-host-proc-sys-kernel\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063620 kubelet[1900]: I0317 20:45:50.063472 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-hostproc\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063620 kubelet[1900]: I0317 20:45:50.063511 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-lib-modules\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063620 kubelet[1900]: I0317 20:45:50.063571 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-config-path\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063620 kubelet[1900]: I0317 20:45:50.063604 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5617a4e-b756-4bcb-921c-7572f6cea0a7-hubble-tls\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063861 kubelet[1900]: I0317 20:45:50.063675 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-host-proc-sys-net\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063861 kubelet[1900]: I0317 20:45:50.063710 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-etc-cni-netd\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.063861 kubelet[1900]: I0317 20:45:50.063776 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdbnm\" (UniqueName: \"kubernetes.io/projected/f5617a4e-b756-4bcb-921c-7572f6cea0a7-kube-api-access-wdbnm\") pod \"cilium-kgm9f\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") " pod="kube-system/cilium-kgm9f"
Mar 17 20:45:50.123116 sshd[3610]: pam_unix(sshd:session): session closed for user core
Mar 17 20:45:50.134374 systemd[1]: Started sshd@24-172.24.4.253:22-172.24.4.1:34094.service.
Mar 17 20:45:50.137126 systemd[1]: sshd@23-172.24.4.253:22-172.24.4.1:34080.service: Deactivated successfully.
Mar 17 20:45:50.141167 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 20:45:50.146557 systemd-logind[1140]: Session 24 logged out. Waiting for processes to exit.
Mar 17 20:45:50.151522 systemd-logind[1140]: Removed session 24.
Mar 17 20:45:50.246807 env[1151]: time="2025-03-17T20:45:50.246696527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgm9f,Uid:f5617a4e-b756-4bcb-921c-7572f6cea0a7,Namespace:kube-system,Attempt:0,}"
Mar 17 20:45:50.270287 env[1151]: time="2025-03-17T20:45:50.270179821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 20:45:50.270434 env[1151]: time="2025-03-17T20:45:50.270303494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 20:45:50.270434 env[1151]: time="2025-03-17T20:45:50.270336145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 20:45:50.270666 env[1151]: time="2025-03-17T20:45:50.270600924Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba pid=3634 runtime=io.containerd.runc.v2
Mar 17 20:45:50.281960 systemd[1]: Started cri-containerd-b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba.scope.
Mar 17 20:45:50.307475 env[1151]: time="2025-03-17T20:45:50.307420298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgm9f,Uid:f5617a4e-b756-4bcb-921c-7572f6cea0a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba\""
Mar 17 20:45:50.312050 env[1151]: time="2025-03-17T20:45:50.311989945Z" level=info msg="CreateContainer within sandbox \"b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 20:45:50.328086 env[1151]: time="2025-03-17T20:45:50.328025671Z" level=info msg="CreateContainer within sandbox \"b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\""
Mar 17 20:45:50.328808 env[1151]: time="2025-03-17T20:45:50.328768420Z" level=info msg="StartContainer for \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\""
Mar 17 20:45:50.346194 systemd[1]: Started cri-containerd-c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672.scope.
Mar 17 20:45:50.357218 systemd[1]: cri-containerd-c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672.scope: Deactivated successfully.
Mar 17 20:45:50.378420 env[1151]: time="2025-03-17T20:45:50.378348130Z" level=info msg="shim disconnected" id=c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672
Mar 17 20:45:50.378420 env[1151]: time="2025-03-17T20:45:50.378404908Z" level=warning msg="cleaning up after shim disconnected" id=c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672 namespace=k8s.io
Mar 17 20:45:50.378420 env[1151]: time="2025-03-17T20:45:50.378416149Z" level=info msg="cleaning up dead shim"
Mar 17 20:45:50.385656 env[1151]: time="2025-03-17T20:45:50.385576616Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3691 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T20:45:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Mar 17 20:45:50.386014 env[1151]: time="2025-03-17T20:45:50.385909293Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed"
Mar 17 20:45:50.388452 env[1151]: time="2025-03-17T20:45:50.388408260Z" level=error msg="Failed to pipe stdout of container \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\"" error="reading from a closed fifo"
Mar 17 20:45:50.388602 env[1151]: time="2025-03-17T20:45:50.388547613Z" level=error msg="Failed to pipe stderr of container \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\"" error="reading from a closed fifo"
Mar 17 20:45:50.392410 env[1151]: time="2025-03-17T20:45:50.392346017Z" level=error msg="StartContainer for \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Mar 17 20:45:50.392656 kubelet[1900]: E0317 20:45:50.392606 1900 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672"
Mar 17 20:45:50.392805 kubelet[1900]: E0317 20:45:50.392773 1900 kuberuntime_manager.go:1341] "Unhandled Error" err=<
Mar 17 20:45:50.392805 kubelet[1900]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Mar 17 20:45:50.392805 kubelet[1900]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Mar 17 20:45:50.392805 kubelet[1900]: rm /hostbin/cilium-mount
Mar 17 20:45:50.392935 kubelet[1900]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdbnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kgm9f_kube-system(f5617a4e-b756-4bcb-921c-7572f6cea0a7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Mar 17 20:45:50.392935 kubelet[1900]: > logger="UnhandledError"
Mar 17 20:45:50.394646 kubelet[1900]: E0317 20:45:50.394563 1900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kgm9f" podUID="f5617a4e-b756-4bcb-921c-7572f6cea0a7"
Mar 17 20:45:50.609229 env[1151]: time="2025-03-17T20:45:50.609136212Z" level=info msg="CreateContainer within sandbox \"b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Mar 17 20:45:50.646138 env[1151]: time="2025-03-17T20:45:50.646047289Z" level=info msg="CreateContainer within sandbox \"b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31\""
Mar 17 20:45:50.648529 env[1151]: time="2025-03-17T20:45:50.648424045Z" level=info msg="StartContainer for \"b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31\""
Mar 17 20:45:50.688296 systemd[1]: Started cri-containerd-b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31.scope.
Mar 17 20:45:50.699322 systemd[1]: cri-containerd-b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31.scope: Deactivated successfully.
Mar 17 20:45:50.699556 systemd[1]: Stopped cri-containerd-b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31.scope.
Mar 17 20:45:50.709865 env[1151]: time="2025-03-17T20:45:50.709819533Z" level=info msg="shim disconnected" id=b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31
Mar 17 20:45:50.710086 env[1151]: time="2025-03-17T20:45:50.710066628Z" level=warning msg="cleaning up after shim disconnected" id=b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31 namespace=k8s.io
Mar 17 20:45:50.710181 env[1151]: time="2025-03-17T20:45:50.710164893Z" level=info msg="cleaning up dead shim"
Mar 17 20:45:50.717534 env[1151]: time="2025-03-17T20:45:50.717478028Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3728 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T20:45:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Mar 17 20:45:50.717784 env[1151]: time="2025-03-17T20:45:50.717733299Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed"
Mar 17 20:45:50.717966 env[1151]: time="2025-03-17T20:45:50.717927465Z" level=error msg="Failed to pipe stdout of container \"b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31\"" error="reading from a closed fifo"
Mar 17 20:45:50.718721 env[1151]: time="2025-03-17T20:45:50.718675243Z" level=error msg="Failed to pipe stderr of container \"b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31\"" error="reading from a closed fifo"
Mar 17 20:45:50.722909 env[1151]: time="2025-03-17T20:45:50.722866759Z" level=error msg="StartContainer for \"b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Mar 17 20:45:50.723216 kubelet[1900]: E0317 20:45:50.723175 1900 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31"
Mar 17 20:45:50.723354 kubelet[1900]: E0317 20:45:50.723331 1900 kuberuntime_manager.go:1341] "Unhandled Error" err=<
Mar 17 20:45:50.723354 kubelet[1900]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Mar 17 20:45:50.723354 kubelet[1900]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Mar 17 20:45:50.723354 kubelet[1900]: rm /hostbin/cilium-mount
Mar 17 20:45:50.723354 kubelet[1900]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdbnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kgm9f_kube-system(f5617a4e-b756-4bcb-921c-7572f6cea0a7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Mar 17 20:45:50.723354 kubelet[1900]: > logger="UnhandledError"
Mar 17 20:45:50.724818 kubelet[1900]: E0317 20:45:50.724769 1900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kgm9f" podUID="f5617a4e-b756-4bcb-921c-7572f6cea0a7"
Mar 17 20:45:51.606066 kubelet[1900]: I0317 20:45:51.605997 1900 scope.go:117] "RemoveContainer" containerID="c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672"
Mar 17 20:45:51.608271 kubelet[1900]: I0317 20:45:51.607703 1900 scope.go:117] "RemoveContainer" containerID="c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672"
Mar 17 20:45:51.610853 env[1151]: time="2025-03-17T20:45:51.610751991Z" level=info msg="RemoveContainer for \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\""
Mar 17 20:45:51.615730 env[1151]: time="2025-03-17T20:45:51.615604800Z" level=info msg="RemoveContainer for \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\""
Mar 17 20:45:51.615902 env[1151]: time="2025-03-17T20:45:51.615822230Z" level=error msg="RemoveContainer for \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\" failed" error="failed to set removing state for container \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\": container is already in removing state"
Mar 17 20:45:51.616621 kubelet[1900]: E0317 20:45:51.616563 1900 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\": container is already in removing state" containerID="c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672"
Mar 17 20:45:51.616967 kubelet[1900]: E0317 20:45:51.616915 1900 kuberuntime_container.go:897] "Unhandled Error" err="failed to remove pod init container \"mount-cgroup\": rpc error: code = Unknown desc = failed to set removing state for container \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\": container is already in removing state; Skipping pod \"cilium-kgm9f_kube-system(f5617a4e-b756-4bcb-921c-7572f6cea0a7)\"" logger="UnhandledError"
Mar 17 20:45:51.618933 kubelet[1900]: E0317 20:45:51.618878 1900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-kgm9f_kube-system(f5617a4e-b756-4bcb-921c-7572f6cea0a7)\"" pod="kube-system/cilium-kgm9f" podUID="f5617a4e-b756-4bcb-921c-7572f6cea0a7"
Mar 17 20:45:51.628081 env[1151]: time="2025-03-17T20:45:51.627995708Z" level=info msg="RemoveContainer for \"c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672\" returns successfully"
Mar 17 20:45:51.735911 sshd[3620]: Accepted publickey for core from 172.24.4.1 port 34094 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:45:51.738921 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:45:51.749289 systemd-logind[1140]: New session 25 of user core.
Mar 17 20:45:51.750415 systemd[1]: Started session-25.scope.
Mar 17 20:45:51.931856 env[1151]: time="2025-03-17T20:45:51.930308350Z" level=info msg="StopPodSandbox for \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\""
Mar 17 20:45:51.931856 env[1151]: time="2025-03-17T20:45:51.930486796Z" level=info msg="TearDown network for sandbox \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\" successfully"
Mar 17 20:45:51.931856 env[1151]: time="2025-03-17T20:45:51.930562468Z" level=info msg="StopPodSandbox for \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\" returns successfully"
Mar 17 20:45:51.933269 env[1151]: time="2025-03-17T20:45:51.933194656Z" level=info msg="RemovePodSandbox for \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\""
Mar 17 20:45:51.933435 env[1151]: time="2025-03-17T20:45:51.933278594Z" level=info msg="Forcibly stopping sandbox \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\""
Mar 17 20:45:51.933521 env[1151]: time="2025-03-17T20:45:51.933456419Z" level=info msg="TearDown network for sandbox \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\" successfully"
Mar 17 20:45:51.939655 env[1151]: time="2025-03-17T20:45:51.939565775Z" level=info msg="RemovePodSandbox \"b40344e51b7aa2b2708e13d1f2339c2cea67761ce4fb2d51611082a755de87c7\" returns successfully"
Mar 17 20:45:51.940579 env[1151]: time="2025-03-17T20:45:51.940526996Z" level=info msg="StopPodSandbox for \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\""
Mar 17 20:45:51.941155 env[1151]: time="2025-03-17T20:45:51.941064137Z" level=info msg="TearDown network for sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" successfully"
Mar 17 20:45:51.941385 env[1151]: time="2025-03-17T20:45:51.941314469Z" level=info msg="StopPodSandbox for \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" returns successfully"
Mar 17 20:45:51.942323 env[1151]: time="2025-03-17T20:45:51.942236956Z" level=info msg="RemovePodSandbox for \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\""
Mar 17 20:45:51.942567 env[1151]: time="2025-03-17T20:45:51.942491014Z" level=info msg="Forcibly stopping sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\""
Mar 17 20:45:51.942910 env[1151]: time="2025-03-17T20:45:51.942860610Z" level=info msg="TearDown network for sandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" successfully"
Mar 17 20:45:51.948818 env[1151]: time="2025-03-17T20:45:51.948750764Z" level=info msg="RemovePodSandbox \"ce89df2fd5e0a785fcbd7064e99b63f539b4ad4829b74decb4ba0f4db11208b0\" returns successfully"
Mar 17 20:45:52.044575 kubelet[1900]: E0317 20:45:52.044460 1900 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 20:45:52.426330 sshd[3620]: pam_unix(sshd:session): session closed for user core
Mar 17 20:45:52.438099 systemd[1]: Started sshd@25-172.24.4.253:22-172.24.4.1:34098.service.
Mar 17 20:45:52.438771 systemd[1]: sshd@24-172.24.4.253:22-172.24.4.1:34094.service: Deactivated successfully.
Mar 17 20:45:52.439554 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 20:45:52.447912 systemd-logind[1140]: Session 25 logged out. Waiting for processes to exit.
Mar 17 20:45:52.449439 systemd-logind[1140]: Removed session 25.
Mar 17 20:45:52.618614 env[1151]: time="2025-03-17T20:45:52.611996253Z" level=info msg="StopPodSandbox for \"b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba\""
Mar 17 20:45:52.618614 env[1151]: time="2025-03-17T20:45:52.612161344Z" level=info msg="Container to stop \"b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 20:45:52.617016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba-shm.mount: Deactivated successfully.
Mar 17 20:45:52.638595 systemd[1]: cri-containerd-b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba.scope: Deactivated successfully.
Mar 17 20:45:52.700809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba-rootfs.mount: Deactivated successfully.
Mar 17 20:45:52.718984 env[1151]: time="2025-03-17T20:45:52.718898191Z" level=info msg="shim disconnected" id=b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba
Mar 17 20:45:52.718984 env[1151]: time="2025-03-17T20:45:52.718946383Z" level=warning msg="cleaning up after shim disconnected" id=b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba namespace=k8s.io
Mar 17 20:45:52.718984 env[1151]: time="2025-03-17T20:45:52.718969226Z" level=info msg="cleaning up dead shim"
Mar 17 20:45:52.735043 env[1151]: time="2025-03-17T20:45:52.734968445Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3773 runtime=io.containerd.runc.v2\n"
Mar 17 20:45:52.735903 env[1151]: time="2025-03-17T20:45:52.735843493Z" level=info msg="TearDown network for sandbox \"b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba\" successfully"
Mar 17 20:45:52.736121 env[1151]: time="2025-03-17T20:45:52.736072765Z" level=info msg="StopPodSandbox for \"b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba\" returns successfully"
Mar 17 20:45:52.884193 kubelet[1900]: I0317 20:45:52.884061 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-cgroup\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.884913 kubelet[1900]: I0317 20:45:52.884226 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.884913 kubelet[1900]: I0317 20:45:52.884295 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5617a4e-b756-4bcb-921c-7572f6cea0a7-hubble-tls\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.884913 kubelet[1900]: I0317 20:45:52.884326 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-run\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.884913 kubelet[1900]: I0317 20:45:52.884750 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-lib-modules\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.884913 kubelet[1900]: I0317 20:45:52.884777 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-host-proc-sys-net\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.884913 kubelet[1900]: I0317 20:45:52.884820 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cni-path\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.884913 kubelet[1900]: I0317 20:45:52.884847 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-etc-cni-netd\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.884913 kubelet[1900]: I0317 20:45:52.884904 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-config-path\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.884935 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-ipsec-secrets\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.884959 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-host-proc-sys-kernel\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885000 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-xtables-lock\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885026 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5617a4e-b756-4bcb-921c-7572f6cea0a7-clustermesh-secrets\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885050 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdbnm\" (UniqueName: \"kubernetes.io/projected/f5617a4e-b756-4bcb-921c-7572f6cea0a7-kube-api-access-wdbnm\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885106 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-bpf-maps\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885129 1900 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-hostproc\") pod \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\" (UID: \"f5617a4e-b756-4bcb-921c-7572f6cea0a7\") "
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885196 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-cgroup\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885244 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-hostproc" (OuterVolumeSpecName: "hostproc") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885271 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885290 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885327 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885349 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cni-path" (OuterVolumeSpecName: "cni-path") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.885454 kubelet[1900]: I0317 20:45:52.885368 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.890004 kubelet[1900]: I0317 20:45:52.888530 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 17 20:45:52.890004 kubelet[1900]: I0317 20:45:52.889575 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.890004 kubelet[1900]: I0317 20:45:52.889725 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.890458 kubelet[1900]: I0317 20:45:52.890419 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 20:45:52.894433 systemd[1]: var-lib-kubelet-pods-f5617a4e\x2db756\x2d4bcb\x2d921c\x2d7572f6cea0a7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 20:45:52.900484 kubelet[1900]: I0317 20:45:52.900391 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5617a4e-b756-4bcb-921c-7572f6cea0a7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 20:45:52.906588 systemd[1]: var-lib-kubelet-pods-f5617a4e\x2db756\x2d4bcb\x2d921c\x2d7572f6cea0a7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 20:45:52.911032 kubelet[1900]: I0317 20:45:52.910977 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5617a4e-b756-4bcb-921c-7572f6cea0a7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 17 20:45:52.913578 systemd[1]: var-lib-kubelet-pods-f5617a4e\x2db756\x2d4bcb\x2d921c\x2d7572f6cea0a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwdbnm.mount: Deactivated successfully.
Mar 17 20:45:52.917807 kubelet[1900]: I0317 20:45:52.917617 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5617a4e-b756-4bcb-921c-7572f6cea0a7-kube-api-access-wdbnm" (OuterVolumeSpecName: "kube-api-access-wdbnm") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "kube-api-access-wdbnm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 20:45:52.921775 kubelet[1900]: I0317 20:45:52.921717 1900 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f5617a4e-b756-4bcb-921c-7572f6cea0a7" (UID: "f5617a4e-b756-4bcb-921c-7572f6cea0a7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 17 20:45:52.986614 kubelet[1900]: I0317 20:45:52.986444 1900 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cni-path\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.986614 kubelet[1900]: I0317 20:45:52.986501 1900 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-etc-cni-netd\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.986614 kubelet[1900]: I0317 20:45:52.986530 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-config-path\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.986614 kubelet[1900]: I0317 20:45:52.986555 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-ipsec-secrets\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.986614 kubelet[1900]: I0317 20:45:52.986579 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-host-proc-sys-kernel\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.989827 kubelet[1900]: I0317 20:45:52.989698 1900 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wdbnm\" (UniqueName: \"kubernetes.io/projected/f5617a4e-b756-4bcb-921c-7572f6cea0a7-kube-api-access-wdbnm\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.989827 kubelet[1900]: I0317 20:45:52.989783 1900 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-bpf-maps\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.990038 kubelet[1900]: I0317 20:45:52.989849 1900 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-xtables-lock\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.990038 kubelet[1900]: I0317 20:45:52.989882 1900 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5617a4e-b756-4bcb-921c-7572f6cea0a7-clustermesh-secrets\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.990038 kubelet[1900]: I0317 20:45:52.989905 1900 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-hostproc\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.990038 kubelet[1900]: I0317 20:45:52.989930 1900 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5617a4e-b756-4bcb-921c-7572f6cea0a7-hubble-tls\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.990038 kubelet[1900]: I0317 20:45:52.989951 1900 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-cilium-run\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.990038 kubelet[1900]: I0317 20:45:52.989973 1900 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-lib-modules\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:52.990038 kubelet[1900]: I0317 20:45:52.989994 1900 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5617a4e-b756-4bcb-921c-7572f6cea0a7-host-proc-sys-net\") on node \"ci-3510-3-7-0-2f3ee5d9b1.novalocal\" DevicePath \"\""
Mar 17 20:45:53.487202 kubelet[1900]: W0317 20:45:53.487089 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5617a4e_b756_4bcb_921c_7572f6cea0a7.slice/cri-containerd-c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672.scope WatchSource:0}: container "c7c8eaf574cc77720d62c26821a537437672b9691356a94605be42bb9c30d672" in namespace "k8s.io": not found
Mar 17 20:45:53.615530 systemd[1]: var-lib-kubelet-pods-f5617a4e\x2db756\x2d4bcb\x2d921c\x2d7572f6cea0a7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 20:45:53.622051 kubelet[1900]: I0317 20:45:53.622000 1900 scope.go:117] "RemoveContainer" containerID="b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31"
Mar 17 20:45:53.626173 env[1151]: time="2025-03-17T20:45:53.626082337Z" level=info msg="RemoveContainer for \"b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31\""
Mar 17 20:45:53.634559 env[1151]: time="2025-03-17T20:45:53.634491532Z" level=info msg="RemoveContainer for \"b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31\" returns successfully"
Mar 17 20:45:53.637945 systemd[1]: Removed slice kubepods-burstable-podf5617a4e_b756_4bcb_921c_7572f6cea0a7.slice.
Mar 17 20:45:53.743646 kubelet[1900]: I0317 20:45:53.743535 1900 memory_manager.go:355] "RemoveStaleState removing state" podUID="f5617a4e-b756-4bcb-921c-7572f6cea0a7" containerName="mount-cgroup"
Mar 17 20:45:53.743822 kubelet[1900]: I0317 20:45:53.743799 1900 memory_manager.go:355] "RemoveStaleState removing state" podUID="f5617a4e-b756-4bcb-921c-7572f6cea0a7" containerName="mount-cgroup"
Mar 17 20:45:53.750527 systemd[1]: Created slice kubepods-burstable-pod73932885_198b_480c_a08b_4d4cc5294346.slice.
Mar 17 20:45:53.894194 sshd[3752]: Accepted publickey for core from 172.24.4.1 port 34098 ssh2: RSA SHA256:C5VjSwTx1+F/qgwhITvi7SwdkRj7iWk/sWX2LxyYuws
Mar 17 20:45:53.895873 sshd[3752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:45:53.903275 systemd[1]: Started session-26.scope.
Mar 17 20:45:53.903455 systemd-logind[1140]: New session 26 of user core.
Mar 17 20:45:53.906149 kubelet[1900]: I0317 20:45:53.906088 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-hostproc\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906136 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73932885-198b-480c-a08b-4d4cc5294346-clustermesh-secrets\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906183 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73932885-198b-480c-a08b-4d4cc5294346-hubble-tls\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906206 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-cilium-run\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906246 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-etc-cni-netd\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906269 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-xtables-lock\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906333 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-bpf-maps\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906361 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-cilium-cgroup\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906383 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-host-proc-sys-net\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906430 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24lf\" (UniqueName: \"kubernetes.io/projected/73932885-198b-480c-a08b-4d4cc5294346-kube-api-access-r24lf\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906464 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-host-proc-sys-kernel\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906524 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73932885-198b-480c-a08b-4d4cc5294346-cilium-config-path\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.906583 kubelet[1900]: I0317 20:45:53.906560 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73932885-198b-480c-a08b-4d4cc5294346-cilium-ipsec-secrets\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.907239 kubelet[1900]: I0317 20:45:53.906611 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-cni-path\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.907239 kubelet[1900]: I0317 20:45:53.906689 1900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73932885-198b-480c-a08b-4d4cc5294346-lib-modules\") pod \"cilium-2vw5w\" (UID: \"73932885-198b-480c-a08b-4d4cc5294346\") " pod="kube-system/cilium-2vw5w"
Mar 17 20:45:53.911543 kubelet[1900]: I0317 20:45:53.911503 1900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5617a4e-b756-4bcb-921c-7572f6cea0a7" path="/var/lib/kubelet/pods/f5617a4e-b756-4bcb-921c-7572f6cea0a7/volumes"
Mar 17 20:45:54.054461 env[1151]: time="2025-03-17T20:45:54.054348790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vw5w,Uid:73932885-198b-480c-a08b-4d4cc5294346,Namespace:kube-system,Attempt:0,}"
Mar 17 20:45:54.153867 env[1151]: time="2025-03-17T20:45:54.153680271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 20:45:54.154095 env[1151]: time="2025-03-17T20:45:54.153917366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 20:45:54.154206 env[1151]: time="2025-03-17T20:45:54.154077017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 20:45:54.154970 env[1151]: time="2025-03-17T20:45:54.154883916Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166 pid=3802 runtime=io.containerd.runc.v2
Mar 17 20:45:54.192300 systemd[1]: Started cri-containerd-c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166.scope.
Mar 17 20:45:54.230728 env[1151]: time="2025-03-17T20:45:54.230686687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vw5w,Uid:73932885-198b-480c-a08b-4d4cc5294346,Namespace:kube-system,Attempt:0,} returns sandbox id \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\""
Mar 17 20:45:54.233454 env[1151]: time="2025-03-17T20:45:54.233396209Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 20:45:54.249958 env[1151]: time="2025-03-17T20:45:54.249907596Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2be02ef2f2b630484ff9dabab460b3eed4ae5fa002beafbb1133bfd0970c7216\""
Mar 17 20:45:54.250886 env[1151]: time="2025-03-17T20:45:54.250855771Z" level=info msg="StartContainer for \"2be02ef2f2b630484ff9dabab460b3eed4ae5fa002beafbb1133bfd0970c7216\""
Mar 17 20:45:54.266863 systemd[1]: Started cri-containerd-2be02ef2f2b630484ff9dabab460b3eed4ae5fa002beafbb1133bfd0970c7216.scope.
Mar 17 20:45:54.300134 env[1151]: time="2025-03-17T20:45:54.300085210Z" level=info msg="StartContainer for \"2be02ef2f2b630484ff9dabab460b3eed4ae5fa002beafbb1133bfd0970c7216\" returns successfully"
Mar 17 20:45:54.307640 systemd[1]: cri-containerd-2be02ef2f2b630484ff9dabab460b3eed4ae5fa002beafbb1133bfd0970c7216.scope: Deactivated successfully.
Mar 17 20:45:54.550765 env[1151]: time="2025-03-17T20:45:54.550599195Z" level=info msg="shim disconnected" id=2be02ef2f2b630484ff9dabab460b3eed4ae5fa002beafbb1133bfd0970c7216
Mar 17 20:45:54.550765 env[1151]: time="2025-03-17T20:45:54.550727376Z" level=warning msg="cleaning up after shim disconnected" id=2be02ef2f2b630484ff9dabab460b3eed4ae5fa002beafbb1133bfd0970c7216 namespace=k8s.io
Mar 17 20:45:54.550765 env[1151]: time="2025-03-17T20:45:54.550751301Z" level=info msg="cleaning up dead shim"
Mar 17 20:45:54.577149 env[1151]: time="2025-03-17T20:45:54.576565835Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3893 runtime=io.containerd.runc.v2\n"
Mar 17 20:45:54.631499 env[1151]: time="2025-03-17T20:45:54.631426165Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 20:45:54.656249 env[1151]: time="2025-03-17T20:45:54.656199689Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372\""
Mar 17 20:45:54.657097 env[1151]: time="2025-03-17T20:45:54.657072472Z" level=info msg="StartContainer for \"2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372\""
Mar 17 20:45:54.678930 systemd[1]: Started cri-containerd-2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372.scope.
Mar 17 20:45:54.704386 env[1151]: time="2025-03-17T20:45:54.704339175Z" level=info msg="StartContainer for \"2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372\" returns successfully"
Mar 17 20:45:54.709338 systemd[1]: cri-containerd-2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372.scope: Deactivated successfully.
Mar 17 20:45:54.737458 env[1151]: time="2025-03-17T20:45:54.737399801Z" level=info msg="shim disconnected" id=2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372
Mar 17 20:45:54.737458 env[1151]: time="2025-03-17T20:45:54.737447972Z" level=warning msg="cleaning up after shim disconnected" id=2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372 namespace=k8s.io
Mar 17 20:45:54.737458 env[1151]: time="2025-03-17T20:45:54.737461497Z" level=info msg="cleaning up dead shim"
Mar 17 20:45:54.744445 env[1151]: time="2025-03-17T20:45:54.744404890Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3955 runtime=io.containerd.runc.v2\n"
Mar 17 20:45:55.617105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372-rootfs.mount: Deactivated successfully.
Mar 17 20:45:55.641525 env[1151]: time="2025-03-17T20:45:55.641352583Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 20:45:55.672194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436393394.mount: Deactivated successfully.
Mar 17 20:45:55.690016 env[1151]: time="2025-03-17T20:45:55.689919534Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30\""
Mar 17 20:45:55.691523 env[1151]: time="2025-03-17T20:45:55.691462028Z" level=info msg="StartContainer for \"014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30\""
Mar 17 20:45:55.722847 kubelet[1900]: I0317 20:45:55.722706 1900 setters.go:602] "Node became not ready" node="ci-3510-3-7-0-2f3ee5d9b1.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T20:45:55Z","lastTransitionTime":"2025-03-17T20:45:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 20:45:55.748535 systemd[1]: Started cri-containerd-014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30.scope.
Mar 17 20:45:55.781614 systemd[1]: cri-containerd-014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30.scope: Deactivated successfully.
Mar 17 20:45:55.788577 env[1151]: time="2025-03-17T20:45:55.788529386Z" level=info msg="StartContainer for \"014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30\" returns successfully"
Mar 17 20:45:55.813942 env[1151]: time="2025-03-17T20:45:55.813902312Z" level=info msg="shim disconnected" id=014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30
Mar 17 20:45:55.814197 env[1151]: time="2025-03-17T20:45:55.814177540Z" level=warning msg="cleaning up after shim disconnected" id=014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30 namespace=k8s.io
Mar 17 20:45:55.814275 env[1151]: time="2025-03-17T20:45:55.814261257Z" level=info msg="cleaning up dead shim"
Mar 17 20:45:55.822057 env[1151]: time="2025-03-17T20:45:55.822017078Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4013 runtime=io.containerd.runc.v2\n"
Mar 17 20:45:56.610760 kubelet[1900]: W0317 20:45:56.610702 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5617a4e_b756_4bcb_921c_7572f6cea0a7.slice/cri-containerd-b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31.scope WatchSource:0}: container "b8ed73f1da17075728ad4aa9fa15cb6a2e3c297db296db3cb230a38ecd60ea31" in namespace "k8s.io": not found
Mar 17 20:45:56.617023 systemd[1]: run-containerd-runc-k8s.io-014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30-runc.fHCo6C.mount: Deactivated successfully.
Mar 17 20:45:56.617247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30-rootfs.mount: Deactivated successfully.
Mar 17 20:45:56.633047 kubelet[1900]: E0317 20:45:56.632977 1900 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5617a4e_b756_4bcb_921c_7572f6cea0a7.slice/cri-containerd-b9f8fa13468b6b5ba4dd47b7f09bb517beda88dafb5a8092ff58f98cc1cc23ba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5617a4e_b756_4bcb_921c_7572f6cea0a7.slice\": RecentStats: unable to find data in memory cache]"
Mar 17 20:45:56.653317 env[1151]: time="2025-03-17T20:45:56.650609646Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 20:45:56.698746 env[1151]: time="2025-03-17T20:45:56.698657269Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b\""
Mar 17 20:45:56.701690 env[1151]: time="2025-03-17T20:45:56.701645755Z" level=info msg="StartContainer for \"bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b\""
Mar 17 20:45:56.729418 systemd[1]: Started cri-containerd-bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b.scope.
Mar 17 20:45:56.764075 systemd[1]: cri-containerd-bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b.scope: Deactivated successfully.
Mar 17 20:45:56.765647 env[1151]: time="2025-03-17T20:45:56.765519199Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73932885_198b_480c_a08b_4d4cc5294346.slice/cri-containerd-bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b.scope/memory.events\": no such file or directory"
Mar 17 20:45:56.769599 env[1151]: time="2025-03-17T20:45:56.769569533Z" level=info msg="StartContainer for \"bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b\" returns successfully"
Mar 17 20:45:56.793366 env[1151]: time="2025-03-17T20:45:56.793323798Z" level=info msg="shim disconnected" id=bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b
Mar 17 20:45:56.793578 env[1151]: time="2025-03-17T20:45:56.793558841Z" level=warning msg="cleaning up after shim disconnected" id=bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b namespace=k8s.io
Mar 17 20:45:56.793702 env[1151]: time="2025-03-17T20:45:56.793685439Z" level=info msg="cleaning up dead shim"
Mar 17 20:45:56.800884 env[1151]: time="2025-03-17T20:45:56.800834917Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:45:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4067 runtime=io.containerd.runc.v2\n"
Mar 17 20:45:57.046558 kubelet[1900]: E0317 20:45:57.046467 1900 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 20:45:57.618213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b-rootfs.mount: Deactivated successfully.
Mar 17 20:45:57.666092 env[1151]: time="2025-03-17T20:45:57.665817136Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 20:45:57.713099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1371458543.mount: Deactivated successfully.
Mar 17 20:45:57.724727 env[1151]: time="2025-03-17T20:45:57.723787398Z" level=info msg="CreateContainer within sandbox \"c74e9495a602ca55acbc09b0805f1429f9cbff84510a9e1da58ab14aa6ead166\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"630c9c955a9cb7f8b1bf1cec068017fc880adff79b784c6717b6ac4bdb5fa469\""
Mar 17 20:45:57.726221 env[1151]: time="2025-03-17T20:45:57.726158010Z" level=info msg="StartContainer for \"630c9c955a9cb7f8b1bf1cec068017fc880adff79b784c6717b6ac4bdb5fa469\""
Mar 17 20:45:57.763954 systemd[1]: Started cri-containerd-630c9c955a9cb7f8b1bf1cec068017fc880adff79b784c6717b6ac4bdb5fa469.scope.
Mar 17 20:45:57.797784 env[1151]: time="2025-03-17T20:45:57.797742925Z" level=info msg="StartContainer for \"630c9c955a9cb7f8b1bf1cec068017fc880adff79b784c6717b6ac4bdb5fa469\" returns successfully"
Mar 17 20:45:58.142658 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 20:45:58.210661 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Mar 17 20:45:58.865987 systemd[1]: run-containerd-runc-k8s.io-630c9c955a9cb7f8b1bf1cec068017fc880adff79b784c6717b6ac4bdb5fa469-runc.cnAvft.mount: Deactivated successfully.
Mar 17 20:45:58.903082 kubelet[1900]: E0317 20:45:58.903010 1900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rqtmg" podUID="52befd08-ba1b-4181-87df-e6c24dc736b0"
Mar 17 20:45:59.734403 kubelet[1900]: W0317 20:45:59.733790 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73932885_198b_480c_a08b_4d4cc5294346.slice/cri-containerd-2be02ef2f2b630484ff9dabab460b3eed4ae5fa002beafbb1133bfd0970c7216.scope WatchSource:0}: task 2be02ef2f2b630484ff9dabab460b3eed4ae5fa002beafbb1133bfd0970c7216 not found: not found
Mar 17 20:46:00.903773 kubelet[1900]: E0317 20:46:00.903698 1900 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-rqtmg" podUID="52befd08-ba1b-4181-87df-e6c24dc736b0"
Mar 17 20:46:01.003296 systemd[1]: run-containerd-runc-k8s.io-630c9c955a9cb7f8b1bf1cec068017fc880adff79b784c6717b6ac4bdb5fa469-runc.UQAY5y.mount: Deactivated successfully.
Mar 17 20:46:01.595580 systemd-networkd[983]: lxc_health: Link UP
Mar 17 20:46:01.621378 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 20:46:01.620881 systemd-networkd[983]: lxc_health: Gained carrier
Mar 17 20:46:02.083302 kubelet[1900]: I0317 20:46:02.083238 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2vw5w" podStartSLOduration=9.083220769 podStartE2EDuration="9.083220769s" podCreationTimestamp="2025-03-17 20:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:45:58.70425208 +0000 UTC m=+186.927248598" watchObservedRunningTime="2025-03-17 20:46:02.083220769 +0000 UTC m=+190.306217197"
Mar 17 20:46:02.842510 kubelet[1900]: W0317 20:46:02.842470 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73932885_198b_480c_a08b_4d4cc5294346.slice/cri-containerd-2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372.scope WatchSource:0}: task 2fd006329f3c7a400ef4c3a6e750851fb0e73f70cb89cff5f39531e006e06372 not found: not found
Mar 17 20:46:03.224835 systemd[1]: run-containerd-runc-k8s.io-630c9c955a9cb7f8b1bf1cec068017fc880adff79b784c6717b6ac4bdb5fa469-runc.Fpus3S.mount: Deactivated successfully.
Mar 17 20:46:03.365855 systemd-networkd[983]: lxc_health: Gained IPv6LL
Mar 17 20:46:05.461871 systemd[1]: run-containerd-runc-k8s.io-630c9c955a9cb7f8b1bf1cec068017fc880adff79b784c6717b6ac4bdb5fa469-runc.q1ak8x.mount: Deactivated successfully.
Mar 17 20:46:05.955305 kubelet[1900]: W0317 20:46:05.955209 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73932885_198b_480c_a08b_4d4cc5294346.slice/cri-containerd-014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30.scope WatchSource:0}: task 014880c2fdb8a692267970ee98d427fda71f0c154312432f4243032c9bf3ac30 not found: not found
Mar 17 20:46:07.676778 systemd[1]: run-containerd-runc-k8s.io-630c9c955a9cb7f8b1bf1cec068017fc880adff79b784c6717b6ac4bdb5fa469-runc.uBqUiT.mount: Deactivated successfully.
Mar 17 20:46:08.075275 sshd[3752]: pam_unix(sshd:session): session closed for user core
Mar 17 20:46:08.081004 systemd[1]: sshd@25-172.24.4.253:22-172.24.4.1:34098.service: Deactivated successfully.
Mar 17 20:46:08.082798 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 20:46:08.084228 systemd-logind[1140]: Session 26 logged out. Waiting for processes to exit.
Mar 17 20:46:08.087751 systemd-logind[1140]: Removed session 26.
Mar 17 20:46:09.067970 kubelet[1900]: W0317 20:46:09.067826 1900 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73932885_198b_480c_a08b_4d4cc5294346.slice/cri-containerd-bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b.scope WatchSource:0}: task bca43f4ce8900f6914aedd57ccd0ea9818fc46812cd6838a11879d17b9a3880b not found: not found