Dec 13 04:09:54.992357 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 04:09:54.992407 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 04:09:54.992435 kernel: BIOS-provided physical RAM map:
Dec 13 04:09:54.992453 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 04:09:54.992470 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 04:09:54.992486 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 04:09:54.992505 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 13 04:09:54.992523 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 13 04:09:54.992543 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 04:09:54.992559 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 04:09:54.992576 kernel: NX (Execute Disable) protection: active
Dec 13 04:09:54.992592 kernel: SMBIOS 2.8 present.
Dec 13 04:09:54.992609 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 04:09:54.992625 kernel: Hypervisor detected: KVM
Dec 13 04:09:54.992645 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 04:09:54.992667 kernel: kvm-clock: cpu 0, msr 5e19b001, primary cpu clock
Dec 13 04:09:54.992684 kernel: kvm-clock: using sched offset of 4874119409 cycles
Dec 13 04:09:54.992704 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 04:09:54.992723 kernel: tsc: Detected 1996.249 MHz processor
Dec 13 04:09:54.992741 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 04:09:54.992760 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 04:09:54.992779 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 13 04:09:54.992797 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 04:09:54.992819 kernel: ACPI: Early table checksum verification disabled
Dec 13 04:09:54.992837 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Dec 13 04:09:54.992855 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:09:54.992874 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:09:54.992892 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:09:54.992910 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 04:09:54.992928 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:09:54.992946 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 04:09:54.992964 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Dec 13 04:09:54.992986 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Dec 13 04:09:54.993004 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 04:09:54.993022 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Dec 13 04:09:54.993040 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Dec 13 04:09:54.993057 kernel: No NUMA configuration found
Dec 13 04:09:54.993075 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Dec 13 04:09:54.993093 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Dec 13 04:09:54.993111 kernel: Zone ranges:
Dec 13 04:09:54.993138 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 04:09:54.993157 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Dec 13 04:09:54.993175 kernel: Normal empty
Dec 13 04:09:54.993194 kernel: Movable zone start for each node
Dec 13 04:09:54.993250 kernel: Early memory node ranges
Dec 13 04:09:54.993269 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 04:09:54.993292 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 13 04:09:54.993311 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Dec 13 04:09:54.993330 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 04:09:54.993348 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 04:09:54.993367 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Dec 13 04:09:54.993386 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 04:09:54.993426 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 04:09:54.993445 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 04:09:54.993464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 04:09:54.993487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 04:09:54.993506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 04:09:54.993525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 04:09:54.993544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 04:09:54.993563 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 04:09:54.993582 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 04:09:54.993600 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 04:09:54.993619 kernel: Booting paravirtualized kernel on KVM
Dec 13 04:09:54.993639 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 04:09:54.993658 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 04:09:54.993682 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 04:09:54.993701 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 04:09:54.993719 kernel: pcpu-alloc: [0] 0 1
Dec 13 04:09:54.993738 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Dec 13 04:09:54.993756 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 04:09:54.993775 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Dec 13 04:09:54.993794 kernel: Policy zone: DMA32
Dec 13 04:09:54.993816 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 04:09:54.993840 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 04:09:54.993860 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 04:09:54.993879 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 04:09:54.993898 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 04:09:54.993918 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123076K reserved, 0K cma-reserved)
Dec 13 04:09:54.993937 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 04:09:54.993956 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 04:09:54.993976 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 04:09:54.993997 kernel: rcu: Hierarchical RCU implementation.
Dec 13 04:09:54.994017 kernel: rcu: RCU event tracing is enabled.
Dec 13 04:09:54.994037 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 04:09:54.994056 kernel: Rude variant of Tasks RCU enabled.
Dec 13 04:09:54.994075 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 04:09:54.994095 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 04:09:54.994114 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 04:09:54.994133 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 04:09:54.994152 kernel: Console: colour VGA+ 80x25
Dec 13 04:09:54.994174 kernel: printk: console [tty0] enabled
Dec 13 04:09:54.994193 kernel: printk: console [ttyS0] enabled
Dec 13 04:09:54.996283 kernel: ACPI: Core revision 20210730
Dec 13 04:09:54.996310 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 04:09:54.996329 kernel: x2apic enabled
Dec 13 04:09:54.996348 kernel: Switched APIC routing to physical x2apic.
Dec 13 04:09:54.996367 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 04:09:54.996386 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 04:09:54.996406 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Dec 13 04:09:54.996425 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 04:09:54.996452 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 04:09:54.996471 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 04:09:54.996490 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 04:09:54.996509 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 04:09:54.996528 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 04:09:54.996547 kernel: Speculative Store Bypass: Vulnerable
Dec 13 04:09:54.996566 kernel: x86/fpu: x87 FPU will use FXSAVE
Dec 13 04:09:54.996585 kernel: Freeing SMP alternatives memory: 32K
Dec 13 04:09:54.996603 kernel: pid_max: default: 32768 minimum: 301
Dec 13 04:09:54.996625 kernel: LSM: Security Framework initializing
Dec 13 04:09:54.996643 kernel: SELinux: Initializing.
Dec 13 04:09:54.996662 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 04:09:54.996681 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 04:09:54.996701 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Dec 13 04:09:54.996720 kernel: Performance Events: AMD PMU driver.
Dec 13 04:09:54.996738 kernel: ... version: 0
Dec 13 04:09:54.996757 kernel: ... bit width: 48
Dec 13 04:09:54.996776 kernel: ... generic registers: 4
Dec 13 04:09:54.996808 kernel: ... value mask: 0000ffffffffffff
Dec 13 04:09:54.996828 kernel: ... max period: 00007fffffffffff
Dec 13 04:09:54.996850 kernel: ... fixed-purpose events: 0
Dec 13 04:09:54.996869 kernel: ... event mask: 000000000000000f
Dec 13 04:09:54.996889 kernel: signal: max sigframe size: 1440
Dec 13 04:09:54.996909 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 04:09:54.996928 kernel: smp: Bringing up secondary CPUs ...
Dec 13 04:09:54.996948 kernel: x86: Booting SMP configuration:
Dec 13 04:09:54.996970 kernel: .... node #0, CPUs: #1
Dec 13 04:09:54.996990 kernel: kvm-clock: cpu 1, msr 5e19b041, secondary cpu clock
Dec 13 04:09:54.997009 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Dec 13 04:09:54.997029 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 04:09:54.997048 kernel: smpboot: Max logical packages: 2
Dec 13 04:09:54.997068 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Dec 13 04:09:54.997087 kernel: devtmpfs: initialized
Dec 13 04:09:54.997107 kernel: x86/mm: Memory block size: 128MB
Dec 13 04:09:54.997127 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 04:09:54.997150 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 04:09:54.997170 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 04:09:54.997189 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 04:09:54.997238 kernel: audit: initializing netlink subsys (disabled)
Dec 13 04:09:54.997259 kernel: audit: type=2000 audit(1734062993.653:1): state=initialized audit_enabled=0 res=1
Dec 13 04:09:54.997279 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 04:09:54.997298 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 04:09:54.997317 kernel: cpuidle: using governor menu
Dec 13 04:09:54.997337 kernel: ACPI: bus type PCI registered
Dec 13 04:09:54.997361 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 04:09:54.997381 kernel: dca service started, version 1.12.1
Dec 13 04:09:54.997425 kernel: PCI: Using configuration type 1 for base access
Dec 13 04:09:54.997446 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 04:09:54.997467 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 04:09:54.997486 kernel: ACPI: Added _OSI(Module Device)
Dec 13 04:09:54.997506 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 04:09:54.997525 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 04:09:54.997544 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 04:09:54.997568 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 04:09:54.997587 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 04:09:54.997606 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 04:09:54.997626 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 04:09:54.997646 kernel: ACPI: Interpreter enabled
Dec 13 04:09:54.997665 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 04:09:54.997685 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 04:09:54.997705 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 04:09:54.997724 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 04:09:54.997747 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 04:09:54.998047 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 04:09:54.998302 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 04:09:54.998336 kernel: acpiphp: Slot [3] registered
Dec 13 04:09:54.998356 kernel: acpiphp: Slot [4] registered
Dec 13 04:09:54.998376 kernel: acpiphp: Slot [5] registered
Dec 13 04:09:54.998395 kernel: acpiphp: Slot [6] registered
Dec 13 04:09:54.998422 kernel: acpiphp: Slot [7] registered
Dec 13 04:09:54.998441 kernel: acpiphp: Slot [8] registered
Dec 13 04:09:54.998460 kernel: acpiphp: Slot [9] registered
Dec 13 04:09:54.998479 kernel: acpiphp: Slot [10] registered
Dec 13 04:09:54.998499 kernel: acpiphp: Slot [11] registered
Dec 13 04:09:54.998518 kernel: acpiphp: Slot [12] registered
Dec 13 04:09:54.998538 kernel: acpiphp: Slot [13] registered
Dec 13 04:09:54.998557 kernel: acpiphp: Slot [14] registered
Dec 13 04:09:54.998576 kernel: acpiphp: Slot [15] registered
Dec 13 04:09:54.998595 kernel: acpiphp: Slot [16] registered
Dec 13 04:09:54.998618 kernel: acpiphp: Slot [17] registered
Dec 13 04:09:54.998637 kernel: acpiphp: Slot [18] registered
Dec 13 04:09:54.998657 kernel: acpiphp: Slot [19] registered
Dec 13 04:09:54.998676 kernel: acpiphp: Slot [20] registered
Dec 13 04:09:54.998696 kernel: acpiphp: Slot [21] registered
Dec 13 04:09:54.998715 kernel: acpiphp: Slot [22] registered
Dec 13 04:09:54.998734 kernel: acpiphp: Slot [23] registered
Dec 13 04:09:54.998753 kernel: acpiphp: Slot [24] registered
Dec 13 04:09:54.998772 kernel: acpiphp: Slot [25] registered
Dec 13 04:09:54.998795 kernel: acpiphp: Slot [26] registered
Dec 13 04:09:54.998814 kernel: acpiphp: Slot [27] registered
Dec 13 04:09:54.998833 kernel: acpiphp: Slot [28] registered
Dec 13 04:09:54.998848 kernel: acpiphp: Slot [29] registered
Dec 13 04:09:54.998862 kernel: acpiphp: Slot [30] registered
Dec 13 04:09:54.998877 kernel: acpiphp: Slot [31] registered
Dec 13 04:09:54.998891 kernel: PCI host bridge to bus 0000:00
Dec 13 04:09:54.999058 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 04:09:54.999196 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 04:09:54.999378 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 04:09:54.999512 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 04:09:54.999644 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 04:09:54.999775 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 04:09:54.999947 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 04:09:55.000111 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 04:09:55.005293 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 04:09:55.005382 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Dec 13 04:09:55.005483 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 04:09:55.005566 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 04:09:55.005653 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 04:09:55.005739 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 04:09:55.005833 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 04:09:55.005926 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 04:09:55.006011 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 04:09:55.006105 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 04:09:55.006193 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 04:09:55.006328 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 04:09:55.006422 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 13 04:09:55.006510 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 04:09:55.006595 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 04:09:55.006692 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 04:09:55.006779 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 13 04:09:55.006864 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 13 04:09:55.006944 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 04:09:55.007022 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 04:09:55.007111 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 04:09:55.007191 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 04:09:55.011378 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 13 04:09:55.011467 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 04:09:55.011559 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 04:09:55.011643 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 13 04:09:55.011725 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 04:09:55.011829 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 04:09:55.011913 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Dec 13 04:09:55.011995 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 04:09:55.012006 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 04:09:55.012015 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 04:09:55.012023 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 04:09:55.012031 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 04:09:55.012039 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 04:09:55.012050 kernel: iommu: Default domain type: Translated
Dec 13 04:09:55.012058 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 04:09:55.012139 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 04:09:55.012235 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 04:09:55.012320 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 04:09:55.012331 kernel: vgaarb: loaded
Dec 13 04:09:55.012340 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 04:09:55.012348 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 04:09:55.012356 kernel: PTP clock support registered
Dec 13 04:09:55.012367 kernel: PCI: Using ACPI for IRQ routing
Dec 13 04:09:55.012375 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 04:09:55.012383 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 04:09:55.012391 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 13 04:09:55.012399 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 04:09:55.012407 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 04:09:55.012414 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 04:09:55.012422 kernel: pnp: PnP ACPI init
Dec 13 04:09:55.012505 kernel: pnp 00:03: [dma 2]
Dec 13 04:09:55.012521 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 04:09:55.012529 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 04:09:55.012537 kernel: NET: Registered PF_INET protocol family
Dec 13 04:09:55.012545 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 04:09:55.012553 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 04:09:55.012562 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 04:09:55.012570 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 04:09:55.012578 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 04:09:55.012587 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 04:09:55.012595 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 04:09:55.012603 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 04:09:55.012611 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 04:09:55.012619 kernel: NET: Registered PF_XDP protocol family
Dec 13 04:09:55.012691 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 04:09:55.012764 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 04:09:55.012835 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 04:09:55.012904 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 04:09:55.012979 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 04:09:55.013061 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 04:09:55.013151 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 04:09:55.013250 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 04:09:55.013263 kernel: PCI: CLS 0 bytes, default 64
Dec 13 04:09:55.013271 kernel: Initialise system trusted keyrings
Dec 13 04:09:55.013279 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 04:09:55.013290 kernel: Key type asymmetric registered
Dec 13 04:09:55.013298 kernel: Asymmetric key parser 'x509' registered
Dec 13 04:09:55.013306 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 04:09:55.013314 kernel: io scheduler mq-deadline registered
Dec 13 04:09:55.013322 kernel: io scheduler kyber registered
Dec 13 04:09:55.013330 kernel: io scheduler bfq registered
Dec 13 04:09:55.013338 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 04:09:55.013346 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 04:09:55.013355 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 04:09:55.013363 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 04:09:55.013373 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 04:09:55.013380 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 04:09:55.013388 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 04:09:55.013396 kernel: random: crng init done
Dec 13 04:09:55.013414 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 04:09:55.013422 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 04:09:55.013430 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 04:09:55.013520 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 04:09:55.013537 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 04:09:55.013611 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 04:09:55.013691 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T04:09:54 UTC (1734062994)
Dec 13 04:09:55.013770 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 04:09:55.013782 kernel: NET: Registered PF_INET6 protocol family
Dec 13 04:09:55.013791 kernel: Segment Routing with IPv6
Dec 13 04:09:55.013799 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 04:09:55.013808 kernel: NET: Registered PF_PACKET protocol family
Dec 13 04:09:55.013816 kernel: Key type dns_resolver registered
Dec 13 04:09:55.013827 kernel: IPI shorthand broadcast: enabled
Dec 13 04:09:55.013836 kernel: sched_clock: Marking stable (716007546, 123444893)->(870430151, -30977712)
Dec 13 04:09:55.013845 kernel: registered taskstats version 1
Dec 13 04:09:55.013853 kernel: Loading compiled-in X.509 certificates
Dec 13 04:09:55.013862 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 04:09:55.013871 kernel: Key type .fscrypt registered
Dec 13 04:09:55.013879 kernel: Key type fscrypt-provisioning registered
Dec 13 04:09:55.013888 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 04:09:55.013898 kernel: ima: Allocated hash algorithm: sha1
Dec 13 04:09:55.013906 kernel: ima: No architecture policies found
Dec 13 04:09:55.013915 kernel: clk: Disabling unused clocks
Dec 13 04:09:55.013923 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 04:09:55.013932 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 04:09:55.013940 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 04:09:55.013949 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 04:09:55.013957 kernel: Run /init as init process
Dec 13 04:09:55.013966 kernel: with arguments:
Dec 13 04:09:55.013976 kernel: /init
Dec 13 04:09:55.013984 kernel: with environment:
Dec 13 04:09:55.013992 kernel: HOME=/
Dec 13 04:09:55.014001 kernel: TERM=linux
Dec 13 04:09:55.014009 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 04:09:55.014020 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 04:09:55.014031 systemd[1]: Detected virtualization kvm.
Dec 13 04:09:55.014041 systemd[1]: Detected architecture x86-64.
Dec 13 04:09:55.014053 systemd[1]: Running in initrd.
Dec 13 04:09:55.014062 systemd[1]: No hostname configured, using default hostname.
Dec 13 04:09:55.014071 systemd[1]: Hostname set to .
Dec 13 04:09:55.014081 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 04:09:55.014090 systemd[1]: Queued start job for default target initrd.target.
Dec 13 04:09:55.014099 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 04:09:55.014108 systemd[1]: Reached target cryptsetup.target.
Dec 13 04:09:55.014118 systemd[1]: Reached target paths.target.
Dec 13 04:09:55.014128 systemd[1]: Reached target slices.target.
Dec 13 04:09:55.014137 systemd[1]: Reached target swap.target.
Dec 13 04:09:55.014146 systemd[1]: Reached target timers.target.
Dec 13 04:09:55.014156 systemd[1]: Listening on iscsid.socket.
Dec 13 04:09:55.014165 systemd[1]: Listening on iscsiuio.socket.
Dec 13 04:09:55.014174 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 04:09:55.014183 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 04:09:55.014194 systemd[1]: Listening on systemd-journald.socket.
Dec 13 04:09:55.014218 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 04:09:55.014228 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 04:09:55.014237 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 04:09:55.014246 systemd[1]: Reached target sockets.target.
Dec 13 04:09:55.014266 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 04:09:55.014278 systemd[1]: Finished network-cleanup.service.
Dec 13 04:09:55.014289 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 04:09:55.014298 systemd[1]: Starting systemd-journald.service...
Dec 13 04:09:55.014307 systemd[1]: Starting systemd-modules-load.service...
Dec 13 04:09:55.014317 systemd[1]: Starting systemd-resolved.service...
Dec 13 04:09:55.014326 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 04:09:55.014336 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 04:09:55.014345 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 04:09:55.014358 systemd-journald[184]: Journal started
Dec 13 04:09:55.014414 systemd-journald[184]: Runtime Journal (/run/log/journal/4ae3c69e96f0453b8e4a897d5f04710a) is 4.9M, max 39.5M, 34.5M free.
Dec 13 04:09:54.970278 systemd-modules-load[185]: Inserted module 'overlay'
Dec 13 04:09:55.036887 systemd[1]: Started systemd-journald.service.
Dec 13 04:09:55.036931 kernel: audit: type=1130 audit(1734062995.031:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.023761 systemd-resolved[186]: Positive Trust Anchors:
Dec 13 04:09:55.043669 kernel: audit: type=1130 audit(1734062995.037:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.043690 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 04:09:55.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.023772 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 04:09:55.048762 kernel: audit: type=1130 audit(1734062995.043:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.023808 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 04:09:55.056461 kernel: audit: type=1130 audit(1734062995.049:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.056482 kernel: Bridge firewalling registered
Dec 13 04:09:55.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.026436 systemd-resolved[186]: Defaulting to hostname 'linux'.
Dec 13 04:09:55.037623 systemd[1]: Started systemd-resolved.service.
Dec 13 04:09:55.044517 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 04:09:55.049611 systemd[1]: Reached target nss-lookup.target.
Dec 13 04:09:55.057891 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 04:09:55.058125 systemd-modules-load[185]: Inserted module 'br_netfilter'
Dec 13 04:09:55.060078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 04:09:55.072014 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 04:09:55.077886 kernel: audit: type=1130 audit(1734062995.072:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.079674 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 04:09:55.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.085233 kernel: audit: type=1130 audit(1734062995.080:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:09:55.086427 systemd[1]: Starting dracut-cmdline.service...
Dec 13 04:09:55.096361 dracut-cmdline[201]: dracut-dracut-053
Dec 13 04:09:55.099259 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 04:09:55.101612 kernel: SCSI subsystem initialized
Dec 13 04:09:55.116867 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 04:09:55.116908 kernel: device-mapper: uevent: version 1.0.3
Dec 13 04:09:55.118634 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 04:09:55.123048 systemd-modules-load[185]: Inserted module 'dm_multipath'
Dec 13 04:09:55.123893 systemd[1]: Finished systemd-modules-load.service.
Dec 13 04:09:55.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:55.128594 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:09:55.129322 kernel: audit: type=1130 audit(1734062995.124:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:55.136369 systemd[1]: Finished systemd-sysctl.service. Dec 13 04:09:55.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:55.141568 kernel: audit: type=1130 audit(1734062995.136:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:55.166245 kernel: Loading iSCSI transport class v2.0-870. Dec 13 04:09:55.186233 kernel: iscsi: registered transport (tcp) Dec 13 04:09:55.212755 kernel: iscsi: registered transport (qla4xxx) Dec 13 04:09:55.212810 kernel: QLogic iSCSI HBA Driver Dec 13 04:09:55.252424 systemd[1]: Finished dracut-cmdline.service. Dec 13 04:09:55.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:55.255091 systemd[1]: Starting dracut-pre-udev.service... Dec 13 04:09:55.260138 kernel: audit: type=1130 audit(1734062995.253:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:09:55.330346 kernel: raid6: sse2x4 gen() 12855 MB/s Dec 13 04:09:55.347295 kernel: raid6: sse2x4 xor() 7352 MB/s Dec 13 04:09:55.364417 kernel: raid6: sse2x2 gen() 14730 MB/s Dec 13 04:09:55.382373 kernel: raid6: sse2x2 xor() 8402 MB/s Dec 13 04:09:55.399515 kernel: raid6: sse2x1 gen() 10754 MB/s Dec 13 04:09:55.417060 kernel: raid6: sse2x1 xor() 6838 MB/s Dec 13 04:09:55.417121 kernel: raid6: using algorithm sse2x2 gen() 14730 MB/s Dec 13 04:09:55.417148 kernel: raid6: .... xor() 8402 MB/s, rmw enabled Dec 13 04:09:55.417948 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 04:09:55.433828 kernel: xor: measuring software checksum speed Dec 13 04:09:55.433900 kernel: prefetch64-sse : 18279 MB/sec Dec 13 04:09:55.434923 kernel: generic_sse : 15595 MB/sec Dec 13 04:09:55.434981 kernel: xor: using function: prefetch64-sse (18279 MB/sec) Dec 13 04:09:55.547287 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 04:09:55.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:55.561751 systemd[1]: Finished dracut-pre-udev.service. Dec 13 04:09:55.563000 audit: BPF prog-id=7 op=LOAD Dec 13 04:09:55.564000 audit: BPF prog-id=8 op=LOAD Dec 13 04:09:55.565390 systemd[1]: Starting systemd-udevd.service... Dec 13 04:09:55.577815 systemd-udevd[385]: Using default interface naming scheme 'v252'. Dec 13 04:09:55.582576 systemd[1]: Started systemd-udevd.service. Dec 13 04:09:55.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:55.589014 systemd[1]: Starting dracut-pre-trigger.service... 
Dec 13 04:09:55.613089 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Dec 13 04:09:55.655296 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 04:09:55.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:55.656583 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 04:09:55.709619 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 04:09:55.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:55.761169 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 04:09:55.789600 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 04:09:55.789616 kernel: GPT:17805311 != 41943039 Dec 13 04:09:55.789627 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 04:09:55.789637 kernel: GPT:17805311 != 41943039 Dec 13 04:09:55.789647 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 04:09:55.789657 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:09:55.820236 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (435) Dec 13 04:09:55.822394 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 04:09:55.867389 kernel: libata version 3.00 loaded. 
Dec 13 04:09:55.867411 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 04:09:55.867554 kernel: scsi host0: ata_piix Dec 13 04:09:55.867676 kernel: scsi host1: ata_piix Dec 13 04:09:55.867772 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 04:09:55.867784 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 04:09:55.871107 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 04:09:55.872359 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 04:09:55.876917 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 04:09:55.881526 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 04:09:55.884395 systemd[1]: Starting disk-uuid.service... Dec 13 04:09:55.895077 disk-uuid[461]: Primary Header is updated. Dec 13 04:09:55.895077 disk-uuid[461]: Secondary Entries is updated. Dec 13 04:09:55.895077 disk-uuid[461]: Secondary Header is updated. Dec 13 04:09:55.904231 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:09:55.909229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:09:56.921253 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 04:09:56.922114 disk-uuid[462]: The operation has completed successfully. Dec 13 04:09:56.986903 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 04:09:56.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:56.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:56.987130 systemd[1]: Finished disk-uuid.service. Dec 13 04:09:57.001706 systemd[1]: Starting verity-setup.service... 
Dec 13 04:09:57.030522 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 04:09:57.127633 systemd[1]: Found device dev-mapper-usr.device. Dec 13 04:09:57.131585 systemd[1]: Mounting sysusr-usr.mount... Dec 13 04:09:57.138474 systemd[1]: Finished verity-setup.service. Dec 13 04:09:57.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:57.261274 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 04:09:57.261189 systemd[1]: Mounted sysusr-usr.mount. Dec 13 04:09:57.261777 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 04:09:57.262447 systemd[1]: Starting ignition-setup.service... Dec 13 04:09:57.265588 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 04:09:57.290583 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:09:57.290625 kernel: BTRFS info (device vda6): using free space tree Dec 13 04:09:57.290638 kernel: BTRFS info (device vda6): has skinny extents Dec 13 04:09:57.312413 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 04:09:57.324645 systemd[1]: Finished ignition-setup.service. Dec 13 04:09:57.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:57.325902 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 04:09:57.380304 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 04:09:57.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:09:57.382000 audit: BPF prog-id=9 op=LOAD Dec 13 04:09:57.383432 systemd[1]: Starting systemd-networkd.service... Dec 13 04:09:57.406553 systemd-networkd[632]: lo: Link UP Dec 13 04:09:57.406565 systemd-networkd[632]: lo: Gained carrier Dec 13 04:09:57.407249 systemd-networkd[632]: Enumeration completed Dec 13 04:09:57.407637 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 04:09:57.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:57.409321 systemd[1]: Started systemd-networkd.service. Dec 13 04:09:57.409408 systemd-networkd[632]: eth0: Link UP Dec 13 04:09:57.409413 systemd-networkd[632]: eth0: Gained carrier Dec 13 04:09:57.409931 systemd[1]: Reached target network.target. Dec 13 04:09:57.411078 systemd[1]: Starting iscsiuio.service... Dec 13 04:09:57.416936 systemd[1]: Started iscsiuio.service. Dec 13 04:09:57.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:57.418963 systemd[1]: Starting iscsid.service... Dec 13 04:09:57.421854 iscsid[637]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 04:09:57.421854 iscsid[637]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 04:09:57.421854 iscsid[637]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Dec 13 04:09:57.421854 iscsid[637]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 04:09:57.421854 iscsid[637]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 04:09:57.421854 iscsid[637]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 04:09:57.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:57.424903 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.188/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 04:09:57.425737 systemd[1]: Started iscsid.service. Dec 13 04:09:57.429636 systemd[1]: Starting dracut-initqueue.service... Dec 13 04:09:57.440387 systemd[1]: Finished dracut-initqueue.service. Dec 13 04:09:57.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:57.441192 systemd[1]: Reached target remote-fs-pre.target. Dec 13 04:09:57.441899 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 04:09:57.443062 systemd[1]: Reached target remote-fs.target. Dec 13 04:09:57.444957 systemd[1]: Starting dracut-pre-mount.service... Dec 13 04:09:57.454510 systemd[1]: Finished dracut-pre-mount.service. Dec 13 04:09:57.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:09:57.586358 ignition[568]: Ignition 2.14.0 Dec 13 04:09:57.587145 ignition[568]: Stage: fetch-offline Dec 13 04:09:57.587680 ignition[568]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:09:57.588452 ignition[568]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:09:57.590892 ignition[568]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:09:57.591138 ignition[568]: parsed url from cmdline: "" Dec 13 04:09:57.591149 ignition[568]: no config URL provided Dec 13 04:09:57.591163 ignition[568]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 04:09:57.591183 ignition[568]: no config at "/usr/lib/ignition/user.ign" Dec 13 04:09:57.591236 ignition[568]: failed to fetch config: resource requires networking Dec 13 04:09:57.593767 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 04:09:57.591790 ignition[568]: Ignition finished successfully Dec 13 04:09:57.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:57.597113 systemd[1]: Starting ignition-fetch.service... 
Dec 13 04:09:57.612927 ignition[655]: Ignition 2.14.0 Dec 13 04:09:57.612956 ignition[655]: Stage: fetch Dec 13 04:09:57.613191 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:09:57.613277 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:09:57.615476 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:09:57.615707 ignition[655]: parsed url from cmdline: "" Dec 13 04:09:57.615717 ignition[655]: no config URL provided Dec 13 04:09:57.615730 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 04:09:57.615749 ignition[655]: no config at "/usr/lib/ignition/user.ign" Dec 13 04:09:57.622695 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 04:09:57.623072 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 04:09:57.623112 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 04:09:57.965373 ignition[655]: GET result: OK Dec 13 04:09:57.965585 ignition[655]: parsing config with SHA512: 9dcd66659bceb01fa0b2b6184f410b9b938ef3cf32727a1f422309195d5d18e87f1887856a97c0fcce3c0d26a25436dde8556168473e92bc35dcf49906c11837 Dec 13 04:09:57.981803 unknown[655]: fetched base config from "system" Dec 13 04:09:57.983198 unknown[655]: fetched base config from "system" Dec 13 04:09:57.984554 unknown[655]: fetched user config from "openstack" Dec 13 04:09:57.986472 ignition[655]: fetch: fetch complete Dec 13 04:09:57.986501 ignition[655]: fetch: fetch passed Dec 13 04:09:57.986649 ignition[655]: Ignition finished successfully Dec 13 04:09:57.989556 systemd[1]: Finished ignition-fetch.service. 
Dec 13 04:09:57.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:57.993192 systemd[1]: Starting ignition-kargs.service... Dec 13 04:09:58.015128 ignition[661]: Ignition 2.14.0 Dec 13 04:09:58.016796 ignition[661]: Stage: kargs Dec 13 04:09:58.018195 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:09:58.019988 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:09:58.022417 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:09:58.030362 ignition[661]: kargs: kargs passed Dec 13 04:09:58.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:58.032908 systemd[1]: Finished ignition-kargs.service. Dec 13 04:09:58.030477 ignition[661]: Ignition finished successfully Dec 13 04:09:58.036005 systemd[1]: Starting ignition-disks.service... Dec 13 04:09:58.054621 ignition[667]: Ignition 2.14.0 Dec 13 04:09:58.055904 ignition[667]: Stage: disks Dec 13 04:09:58.056948 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:09:58.058318 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:09:58.059444 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:09:58.061184 ignition[667]: disks: disks passed Dec 13 04:09:58.066924 ignition[667]: Ignition finished successfully Dec 13 04:09:58.069383 systemd[1]: Finished ignition-disks.service. 
Dec 13 04:09:58.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:58.071124 systemd[1]: Reached target initrd-root-device.target. Dec 13 04:09:58.072789 systemd[1]: Reached target local-fs-pre.target. Dec 13 04:09:58.074376 systemd[1]: Reached target local-fs.target. Dec 13 04:09:58.075865 systemd[1]: Reached target sysinit.target. Dec 13 04:09:58.077336 systemd[1]: Reached target basic.target. Dec 13 04:09:58.080637 systemd[1]: Starting systemd-fsck-root.service... Dec 13 04:09:58.104494 systemd-fsck[675]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 04:09:58.116399 systemd[1]: Finished systemd-fsck-root.service. Dec 13 04:09:58.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:58.123138 systemd[1]: Mounting sysroot.mount... Dec 13 04:09:58.142249 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 04:09:58.143838 systemd[1]: Mounted sysroot.mount. Dec 13 04:09:58.145120 systemd[1]: Reached target initrd-root-fs.target. Dec 13 04:09:58.149613 systemd[1]: Mounting sysroot-usr.mount... Dec 13 04:09:58.153761 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 04:09:58.157081 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 04:09:58.158404 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 04:09:58.158472 systemd[1]: Reached target ignition-diskful.target. Dec 13 04:09:58.162015 systemd[1]: Mounted sysroot-usr.mount. Dec 13 04:09:58.173124 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 04:09:58.180596 systemd[1]: Starting initrd-setup-root.service... Dec 13 04:09:58.194144 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 04:09:58.204225 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Dec 13 04:09:58.210444 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:09:58.210471 kernel: BTRFS info (device vda6): using free space tree Dec 13 04:09:58.210483 kernel: BTRFS info (device vda6): has skinny extents Dec 13 04:09:58.217538 initrd-setup-root[711]: cut: /sysroot/etc/group: No such file or directory Dec 13 04:09:58.223703 initrd-setup-root[719]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 04:09:58.228521 initrd-setup-root[729]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 04:09:58.233453 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 04:09:58.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:58.299120 systemd[1]: Finished initrd-setup-root.service. Dec 13 04:09:58.300482 systemd[1]: Starting ignition-mount.service... Dec 13 04:09:58.301589 systemd[1]: Starting sysroot-boot.service... Dec 13 04:09:58.309061 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 04:09:58.309176 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Dec 13 04:09:58.332356 ignition[749]: INFO : Ignition 2.14.0 Dec 13 04:09:58.333458 ignition[749]: INFO : Stage: mount Dec 13 04:09:58.334125 ignition[749]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:09:58.334912 ignition[749]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:09:58.336837 ignition[749]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:09:58.339422 ignition[749]: INFO : mount: mount passed Dec 13 04:09:58.339978 ignition[749]: INFO : Ignition finished successfully Dec 13 04:09:58.341334 systemd[1]: Finished ignition-mount.service. Dec 13 04:09:58.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:58.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:58.349811 systemd[1]: Finished sysroot-boot.service. Dec 13 04:09:58.367658 coreos-metadata[681]: Dec 13 04:09:58.367 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 04:09:58.386368 coreos-metadata[681]: Dec 13 04:09:58.386 INFO Fetch successful Dec 13 04:09:58.387707 coreos-metadata[681]: Dec 13 04:09:58.387 INFO wrote hostname ci-3510-3-6-e-a81afd2c25.novalocal to /sysroot/etc/hostname Dec 13 04:09:58.390888 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 04:09:58.391093 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 04:09:58.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 04:09:58.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:09:58.394556 systemd[1]: Starting ignition-files.service... Dec 13 04:09:58.405135 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 04:09:58.414308 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (759) Dec 13 04:09:58.417918 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 04:09:58.417974 kernel: BTRFS info (device vda6): using free space tree Dec 13 04:09:58.418000 kernel: BTRFS info (device vda6): has skinny extents Dec 13 04:09:58.430512 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 04:09:58.442537 ignition[778]: INFO : Ignition 2.14.0 Dec 13 04:09:58.443468 ignition[778]: INFO : Stage: files Dec 13 04:09:58.444068 ignition[778]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:09:58.444788 ignition[778]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:09:58.446758 ignition[778]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:09:58.451478 ignition[778]: DEBUG : files: compiled without relabeling support, skipping Dec 13 04:09:58.452494 ignition[778]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 04:09:58.453226 ignition[778]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 04:09:58.459350 ignition[778]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 04:09:58.460052 ignition[778]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 04:09:58.460928 unknown[778]: wrote ssh authorized keys 
file for user: core Dec 13 04:09:58.461614 ignition[778]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 04:09:58.462558 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 04:09:58.463490 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 04:09:58.521068 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 04:09:58.826784 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 04:09:58.829323 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 04:09:58.829323 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 04:09:58.854511 systemd-networkd[632]: eth0: Gained IPv6LL Dec 13 04:09:59.387114 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 04:09:59.818878 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 04:09:59.818878 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 04:09:59.823415 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 04:10:00.287662 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 04:10:01.960279 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: 
op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 04:10:01.960279 ignition[778]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 04:10:01.960279 ignition[778]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 04:10:01.960279 ignition[778]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Dec 13 04:10:01.968922 ignition[778]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 04:10:01.968922 ignition[778]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 04:10:01.968922 ignition[778]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Dec 13 04:10:01.968922 ignition[778]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 04:10:01.968922 ignition[778]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 04:10:01.968922 ignition[778]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 04:10:01.968922 ignition[778]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 04:10:01.968922 ignition[778]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 04:10:01.968922 ignition[778]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 04:10:01.968922 ignition[778]: INFO : files: files passed Dec 13 04:10:01.968922 ignition[778]: INFO : Ignition finished successfully Dec 13 04:10:01.989565 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 13 04:10:01.989587 kernel: audit: type=1130 audit(1734063001.972:38): 
pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:01.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:01.970146 systemd[1]: Finished ignition-files.service. Dec 13 04:10:01.973512 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 04:10:02.011759 kernel: audit: type=1130 audit(1734063001.993:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.011810 kernel: audit: type=1131 audit(1734063001.993:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:01.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:01.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:01.984082 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Dec 13 04:10:02.017486 kernel: audit: type=1130 audit(1734063002.011:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.017511 initrd-setup-root-after-ignition[803]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 04:10:01.985836 systemd[1]: Starting ignition-quench.service... Dec 13 04:10:01.992267 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 04:10:01.992474 systemd[1]: Finished ignition-quench.service. Dec 13 04:10:01.994417 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 04:10:02.013595 systemd[1]: Reached target ignition-complete.target. Dec 13 04:10:02.018285 systemd[1]: Starting initrd-parse-etc.service... Dec 13 04:10:02.043130 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 04:10:02.061515 kernel: audit: type=1130 audit(1734063002.043:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.061566 kernel: audit: type=1131 audit(1734063002.043:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.043260 systemd[1]: Finished initrd-parse-etc.service. 
Dec 13 04:10:02.043847 systemd[1]: Reached target initrd-fs.target. Dec 13 04:10:02.061804 systemd[1]: Reached target initrd.target. Dec 13 04:10:02.063302 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 04:10:02.064056 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 04:10:02.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.075827 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 04:10:02.081236 kernel: audit: type=1130 audit(1734063002.076:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.080162 systemd[1]: Starting initrd-cleanup.service... Dec 13 04:10:02.090439 systemd[1]: Stopped target nss-lookup.target. Dec 13 04:10:02.091509 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 04:10:02.092560 systemd[1]: Stopped target timers.target. Dec 13 04:10:02.093559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 04:10:02.094294 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 04:10:02.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.098717 systemd[1]: Stopped target initrd.target. Dec 13 04:10:02.099650 kernel: audit: type=1131 audit(1734063002.095:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.099304 systemd[1]: Stopped target basic.target. Dec 13 04:10:02.100146 systemd[1]: Stopped target ignition-complete.target. 
Dec 13 04:10:02.101029 systemd[1]: Stopped target ignition-diskful.target. Dec 13 04:10:02.101965 systemd[1]: Stopped target initrd-root-device.target. Dec 13 04:10:02.102890 systemd[1]: Stopped target remote-fs.target. Dec 13 04:10:02.103770 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 04:10:02.104616 systemd[1]: Stopped target sysinit.target. Dec 13 04:10:02.105504 systemd[1]: Stopped target local-fs.target. Dec 13 04:10:02.106410 systemd[1]: Stopped target local-fs-pre.target. Dec 13 04:10:02.107281 systemd[1]: Stopped target swap.target. Dec 13 04:10:02.108080 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 04:10:02.112553 kernel: audit: type=1131 audit(1734063002.108:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.108220 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 04:10:02.109134 systemd[1]: Stopped target cryptsetup.target. Dec 13 04:10:02.117490 kernel: audit: type=1131 audit(1734063002.113:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.113045 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 04:10:02.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 04:10:02.113187 systemd[1]: Stopped dracut-initqueue.service. Dec 13 04:10:02.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.114067 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 04:10:02.114237 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 04:10:02.118078 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 04:10:02.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.118235 systemd[1]: Stopped ignition-files.service. Dec 13 04:10:02.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.119861 systemd[1]: Stopping ignition-mount.service... 
Dec 13 04:10:02.134639 ignition[816]: INFO : Ignition 2.14.0 Dec 13 04:10:02.134639 ignition[816]: INFO : Stage: umount Dec 13 04:10:02.134639 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 04:10:02.134639 ignition[816]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 04:10:02.134639 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 04:10:02.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.121321 systemd[1]: Stopping iscsiuio.service... Dec 13 04:10:02.144735 ignition[816]: INFO : umount: umount passed Dec 13 04:10:02.144735 ignition[816]: INFO : Ignition finished successfully Dec 13 04:10:02.125430 systemd[1]: Stopping sysroot-boot.service... Dec 13 04:10:02.125844 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 04:10:02.125974 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 04:10:02.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.126685 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 04:10:02.126803 systemd[1]: Stopped dracut-pre-trigger.service. 
Dec 13 04:10:02.128907 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 04:10:02.129099 systemd[1]: Stopped iscsiuio.service. Dec 13 04:10:02.131821 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 04:10:02.131904 systemd[1]: Finished initrd-cleanup.service. Dec 13 04:10:02.137676 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 04:10:02.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.137761 systemd[1]: Stopped ignition-mount.service. Dec 13 04:10:02.138273 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 04:10:02.138312 systemd[1]: Stopped ignition-disks.service. Dec 13 04:10:02.138798 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 04:10:02.138834 systemd[1]: Stopped ignition-kargs.service. Dec 13 04:10:02.139289 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 04:10:02.139324 systemd[1]: Stopped ignition-fetch.service. Dec 13 04:10:02.147304 systemd[1]: Stopped target network.target. Dec 13 04:10:02.150286 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 04:10:02.150369 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 04:10:02.152055 systemd[1]: Stopped target paths.target. Dec 13 04:10:02.152973 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 04:10:02.158238 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 04:10:02.158853 systemd[1]: Stopped target slices.target. Dec 13 04:10:02.159710 systemd[1]: Stopped target sockets.target. Dec 13 04:10:02.160771 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 04:10:02.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 04:10:02.160801 systemd[1]: Closed iscsid.socket. Dec 13 04:10:02.161668 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 04:10:02.161703 systemd[1]: Closed iscsiuio.socket. Dec 13 04:10:02.162518 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 04:10:02.162559 systemd[1]: Stopped ignition-setup.service. Dec 13 04:10:02.163799 systemd[1]: Stopping systemd-networkd.service... Dec 13 04:10:02.164528 systemd[1]: Stopping systemd-resolved.service... Dec 13 04:10:02.166278 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 04:10:02.168250 systemd-networkd[632]: eth0: DHCPv6 lease lost Dec 13 04:10:02.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.169491 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 04:10:02.169591 systemd[1]: Stopped systemd-networkd.service. Dec 13 04:10:02.175000 audit: BPF prog-id=9 op=UNLOAD Dec 13 04:10:02.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.170196 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 04:10:02.170247 systemd[1]: Closed systemd-networkd.socket. Dec 13 04:10:02.172077 systemd[1]: Stopping network-cleanup.service... 
Dec 13 04:10:02.172529 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 04:10:02.172597 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 04:10:02.173144 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 04:10:02.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.173186 systemd[1]: Stopped systemd-sysctl.service. Dec 13 04:10:02.174778 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 04:10:02.174816 systemd[1]: Stopped systemd-modules-load.service. Dec 13 04:10:02.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.179433 systemd[1]: Stopping systemd-udevd.service... Dec 13 04:10:02.180953 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 04:10:02.181420 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 04:10:02.190000 audit: BPF prog-id=6 op=UNLOAD Dec 13 04:10:02.181523 systemd[1]: Stopped systemd-resolved.service. Dec 13 04:10:02.186311 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 04:10:02.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.186435 systemd[1]: Stopped systemd-udevd.service. Dec 13 04:10:02.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.189398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 13 04:10:02.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.189439 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 04:10:02.190194 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 04:10:02.190235 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 04:10:02.192576 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 04:10:02.192614 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 04:10:02.193523 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 04:10:02.193558 systemd[1]: Stopped dracut-cmdline.service. Dec 13 04:10:02.194450 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 04:10:02.194485 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 04:10:02.195970 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 04:10:02.202589 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 04:10:02.202635 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 04:10:02.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.204389 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 04:10:02.204437 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 04:10:02.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.205944 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 04:10:02.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.205982 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 04:10:02.208195 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 04:10:02.209664 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 04:10:02.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.209752 systemd[1]: Stopped network-cleanup.service. Dec 13 04:10:02.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.210392 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 04:10:02.210462 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 04:10:02.449435 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 04:10:02.449660 systemd[1]: Stopped sysroot-boot.service. Dec 13 04:10:02.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.452472 systemd[1]: Reached target initrd-switch-root.target. Dec 13 04:10:02.454372 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 13 04:10:02.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:02.454469 systemd[1]: Stopped initrd-setup-root.service. Dec 13 04:10:02.458199 systemd[1]: Starting initrd-switch-root.service... Dec 13 04:10:02.502299 systemd[1]: Switching root. Dec 13 04:10:02.542957 iscsid[637]: iscsid shutting down. Dec 13 04:10:02.544276 systemd-journald[184]: Received SIGTERM from PID 1 (n/a). Dec 13 04:10:02.544396 systemd-journald[184]: Journal stopped Dec 13 04:10:06.623455 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 04:10:06.626070 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 04:10:06.626094 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 04:10:06.626111 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 04:10:06.626123 kernel: SELinux: policy capability open_perms=1 Dec 13 04:10:06.626138 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 04:10:06.626151 kernel: SELinux: policy capability always_check_network=0 Dec 13 04:10:06.626162 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 04:10:06.626174 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 04:10:06.626185 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 04:10:06.626198 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 04:10:06.626230 systemd[1]: Successfully loaded SELinux policy in 89.523ms. Dec 13 04:10:06.626251 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.825ms. 
Dec 13 04:10:06.626265 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 04:10:06.626278 systemd[1]: Detected virtualization kvm. Dec 13 04:10:06.626290 systemd[1]: Detected architecture x86-64. Dec 13 04:10:06.626302 systemd[1]: Detected first boot. Dec 13 04:10:06.626315 systemd[1]: Hostname set to . Dec 13 04:10:06.626330 systemd[1]: Initializing machine ID from VM UUID. Dec 13 04:10:06.626342 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 04:10:06.626446 systemd[1]: Populated /etc with preset unit settings. Dec 13 04:10:06.626464 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:10:06.626477 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:10:06.626492 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:10:06.626505 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 04:10:06.626525 systemd[1]: Stopped iscsid.service. Dec 13 04:10:06.626538 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 04:10:06.626550 systemd[1]: Stopped initrd-switch-root.service. Dec 13 04:10:06.626563 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 04:10:06.626576 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Dec 13 04:10:06.626588 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 04:10:06.626607 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 04:10:06.626621 systemd[1]: Created slice system-getty.slice. Dec 13 04:10:06.626635 systemd[1]: Created slice system-modprobe.slice. Dec 13 04:10:06.626648 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 04:10:06.627341 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 04:10:06.627357 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 04:10:06.627369 systemd[1]: Created slice user.slice. Dec 13 04:10:06.627381 systemd[1]: Started systemd-ask-password-console.path. Dec 13 04:10:06.627392 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 04:10:06.627407 systemd[1]: Set up automount boot.automount. Dec 13 04:10:06.627419 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 04:10:06.627431 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 04:10:06.627442 systemd[1]: Stopped target initrd-fs.target. Dec 13 04:10:06.627454 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 04:10:06.627465 systemd[1]: Reached target integritysetup.target. Dec 13 04:10:06.627477 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 04:10:06.627488 systemd[1]: Reached target remote-fs.target. Dec 13 04:10:06.627501 systemd[1]: Reached target slices.target. Dec 13 04:10:06.627513 systemd[1]: Reached target swap.target. Dec 13 04:10:06.627524 systemd[1]: Reached target torcx.target. Dec 13 04:10:06.627536 systemd[1]: Reached target veritysetup.target. Dec 13 04:10:06.627548 systemd[1]: Listening on systemd-coredump.socket. Dec 13 04:10:06.627559 systemd[1]: Listening on systemd-initctl.socket. Dec 13 04:10:06.627570 systemd[1]: Listening on systemd-networkd.socket. Dec 13 04:10:06.627582 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 04:10:06.627594 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Dec 13 04:10:06.627606 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 04:10:06.627619 systemd[1]: Mounting dev-hugepages.mount... Dec 13 04:10:06.627631 systemd[1]: Mounting dev-mqueue.mount... Dec 13 04:10:06.627642 systemd[1]: Mounting media.mount... Dec 13 04:10:06.627654 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:10:06.627665 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 04:10:06.627677 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 04:10:06.627689 systemd[1]: Mounting tmp.mount... Dec 13 04:10:06.627700 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 04:10:06.627712 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 04:10:06.627726 systemd[1]: Starting kmod-static-nodes.service... Dec 13 04:10:06.627738 systemd[1]: Starting modprobe@configfs.service... Dec 13 04:10:06.627750 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 04:10:06.627762 systemd[1]: Starting modprobe@drm.service... Dec 13 04:10:06.627774 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 04:10:06.627785 systemd[1]: Starting modprobe@fuse.service... Dec 13 04:10:06.627797 systemd[1]: Starting modprobe@loop.service... Dec 13 04:10:06.627809 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 04:10:06.627822 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 04:10:06.627835 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 04:10:06.627847 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 04:10:06.627858 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 04:10:06.627870 systemd[1]: Stopped systemd-journald.service. Dec 13 04:10:06.627881 systemd[1]: Starting systemd-journald.service... Dec 13 04:10:06.627893 systemd[1]: Starting systemd-modules-load.service... 
Dec 13 04:10:06.627905 systemd[1]: Starting systemd-network-generator.service... Dec 13 04:10:06.627916 systemd[1]: Starting systemd-remount-fs.service... Dec 13 04:10:06.627929 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 04:10:06.627941 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 04:10:06.627952 systemd[1]: Stopped verity-setup.service. Dec 13 04:10:06.627964 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 04:10:06.627977 systemd[1]: Mounted dev-hugepages.mount. Dec 13 04:10:06.627988 systemd[1]: Mounted dev-mqueue.mount. Dec 13 04:10:06.627999 systemd[1]: Mounted media.mount. Dec 13 04:10:06.628010 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 04:10:06.628021 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 04:10:06.628033 systemd[1]: Mounted tmp.mount. Dec 13 04:10:06.628047 systemd[1]: Finished kmod-static-nodes.service. Dec 13 04:10:06.628058 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 04:10:06.628070 systemd[1]: Finished modprobe@configfs.service. Dec 13 04:10:06.628082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:10:06.628096 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:10:06.628107 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 04:10:06.628122 systemd-journald[915]: Journal started Dec 13 04:10:06.628174 systemd-journald[915]: Runtime Journal (/run/log/journal/4ae3c69e96f0453b8e4a897d5f04710a) is 4.9M, max 39.5M, 34.5M free. 
Dec 13 04:10:02.842000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 04:10:02.943000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 04:10:02.943000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 04:10:02.944000 audit: BPF prog-id=10 op=LOAD Dec 13 04:10:02.944000 audit: BPF prog-id=10 op=UNLOAD Dec 13 04:10:02.944000 audit: BPF prog-id=11 op=LOAD Dec 13 04:10:02.944000 audit: BPF prog-id=11 op=UNLOAD Dec 13 04:10:03.095000 audit[849]: AVC avc: denied { associate } for pid=849 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 04:10:03.095000 audit[849]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:10:03.095000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 04:10:03.097000 audit[849]: AVC avc: denied { associate } for pid=849 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 04:10:03.097000 audit[849]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:10:03.097000 audit: CWD cwd="/" Dec 13 04:10:03.097000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:03.097000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:03.097000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 04:10:06.403000 audit: BPF prog-id=12 op=LOAD Dec 13 04:10:06.403000 audit: BPF prog-id=3 op=UNLOAD Dec 13 04:10:06.404000 audit: BPF prog-id=13 op=LOAD Dec 13 04:10:06.404000 audit: BPF prog-id=14 op=LOAD Dec 13 04:10:06.404000 audit: BPF prog-id=4 op=UNLOAD Dec 13 04:10:06.404000 audit: BPF prog-id=5 op=UNLOAD Dec 13 04:10:06.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:10:06.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.429000 audit: BPF prog-id=12 op=UNLOAD Dec 13 04:10:06.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.578000 audit: BPF prog-id=15 op=LOAD Dec 13 04:10:06.579000 audit: BPF prog-id=16 op=LOAD Dec 13 04:10:06.579000 audit: BPF prog-id=17 op=LOAD Dec 13 04:10:06.579000 audit: BPF prog-id=13 op=UNLOAD Dec 13 04:10:06.579000 audit: BPF prog-id=14 op=UNLOAD Dec 13 04:10:06.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:10:06.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.620000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 04:10:06.620000 audit[915]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffeab008b30 a2=4000 a3=7ffeab008bcc items=0 ppid=1 pid=915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:10:06.620000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 04:10:06.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.401801 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 04:10:03.090978 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:10:06.401815 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 04:10:03.091857 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 04:10:06.405612 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 04:10:03.091880 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 04:10:03.091930 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 04:10:03.091942 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 04:10:03.091975 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 04:10:03.091989 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 04:10:03.092248 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 04:10:03.092287 /usr/lib/systemd/system-generators/torcx-generator[849]: 
time="2024-12-13T04:10:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 04:10:03.092302 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 04:10:03.094505 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 04:10:03.094545 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 04:10:03.094566 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 04:10:03.094583 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 04:10:03.094604 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 04:10:03.094619 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 04:10:06.631257 systemd[1]: Finished modprobe@drm.service. 
Dec 13 04:10:06.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:05.990754 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:10:05.991422 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:10:05.991672 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:10:05.992112 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 04:10:05.992292 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:05Z" level=debug msg="profile applied" sealed 
profile=/run/torcx/profile.json upper profile= Dec 13 04:10:05.992495 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T04:10:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 04:10:06.636449 systemd[1]: Started systemd-journald.service. Dec 13 04:10:06.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.634613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:10:06.634739 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:10:06.635419 systemd[1]: Finished systemd-modules-load.service. 
Dec 13 04:10:06.636041 systemd[1]: Finished systemd-network-generator.service. Dec 13 04:10:06.636715 systemd[1]: Finished systemd-remount-fs.service. Dec 13 04:10:06.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.638527 systemd[1]: Reached target network-pre.target. Dec 13 04:10:06.640986 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 04:10:06.643992 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 04:10:06.645445 kernel: fuse: init (API version 7.34) Dec 13 04:10:06.645482 kernel: loop: module loaded Dec 13 04:10:06.648505 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 04:10:06.649873 systemd[1]: Starting systemd-journal-flush.service... Dec 13 04:10:06.650374 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:10:06.651395 systemd[1]: Starting systemd-random-seed.service... Dec 13 04:10:06.652861 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:10:06.654852 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 04:10:06.654986 systemd[1]: Finished modprobe@fuse.service. Dec 13 04:10:06.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.659381 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 04:10:06.659511 systemd[1]: Finished modprobe@loop.service. Dec 13 04:10:06.659810 systemd-journald[915]: Time spent on flushing to /var/log/journal/4ae3c69e96f0453b8e4a897d5f04710a is 45.288ms for 1087 entries. Dec 13 04:10:06.659810 systemd-journald[915]: System Journal (/var/log/journal/4ae3c69e96f0453b8e4a897d5f04710a) is 8.0M, max 584.8M, 576.8M free. Dec 13 04:10:06.739457 systemd-journald[915]: Received client request to flush runtime journal. Dec 13 04:10:06.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.662511 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 04:10:06.665011 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 04:10:06.667977 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:10:06.668892 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 04:10:06.683694 systemd[1]: Finished systemd-random-seed.service. Dec 13 04:10:06.684271 systemd[1]: Reached target first-boot-complete.target. Dec 13 04:10:06.691936 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 04:10:06.740727 systemd[1]: Finished systemd-journal-flush.service. Dec 13 04:10:06.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.742723 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 04:10:06.744253 systemd[1]: Starting systemd-udev-settle.service... Dec 13 04:10:06.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.752924 udevadm[954]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 04:10:06.759251 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 04:10:06.760884 systemd[1]: Starting systemd-sysusers.service... Dec 13 04:10:06.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.793374 systemd[1]: Finished systemd-sysusers.service. Dec 13 04:10:06.794838 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 04:10:06.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:06.841058 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 04:10:06.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:10:07.365977 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 04:10:07.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:07.387343 kernel: kauditd_printk_skb: 96 callbacks suppressed Dec 13 04:10:07.387503 kernel: audit: type=1130 audit(1734063007.366:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:07.387000 audit: BPF prog-id=18 op=LOAD Dec 13 04:10:07.390679 kernel: audit: type=1334 audit(1734063007.387:136): prog-id=18 op=LOAD Dec 13 04:10:07.390000 audit: BPF prog-id=19 op=LOAD Dec 13 04:10:07.393836 kernel: audit: type=1334 audit(1734063007.390:137): prog-id=19 op=LOAD Dec 13 04:10:07.393931 kernel: audit: type=1334 audit(1734063007.390:138): prog-id=7 op=UNLOAD Dec 13 04:10:07.390000 audit: BPF prog-id=7 op=UNLOAD Dec 13 04:10:07.396925 kernel: audit: type=1334 audit(1734063007.390:139): prog-id=8 op=UNLOAD Dec 13 04:10:07.390000 audit: BPF prog-id=8 op=UNLOAD Dec 13 04:10:07.394494 systemd[1]: Starting systemd-udevd.service... Dec 13 04:10:07.427671 systemd-udevd[963]: Using default interface naming scheme 'v252'. Dec 13 04:10:07.461399 systemd[1]: Started systemd-udevd.service. Dec 13 04:10:07.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:07.477267 kernel: audit: type=1130 audit(1734063007.463:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:10:07.477857 systemd[1]: Starting systemd-networkd.service... Dec 13 04:10:07.467000 audit: BPF prog-id=20 op=LOAD Dec 13 04:10:07.492249 kernel: audit: type=1334 audit(1734063007.467:141): prog-id=20 op=LOAD Dec 13 04:10:07.507000 audit: BPF prog-id=21 op=LOAD Dec 13 04:10:07.508000 audit: BPF prog-id=22 op=LOAD Dec 13 04:10:07.517401 kernel: audit: type=1334 audit(1734063007.507:142): prog-id=21 op=LOAD Dec 13 04:10:07.517469 kernel: audit: type=1334 audit(1734063007.508:143): prog-id=22 op=LOAD Dec 13 04:10:07.517183 systemd[1]: Starting systemd-userdbd.service... Dec 13 04:10:07.508000 audit: BPF prog-id=23 op=LOAD Dec 13 04:10:07.525340 kernel: audit: type=1334 audit(1734063007.508:144): prog-id=23 op=LOAD Dec 13 04:10:07.534027 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 04:10:07.570073 systemd[1]: Started systemd-userdbd.service. Dec 13 04:10:07.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:10:07.601235 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 04:10:07.607000 audit[970]: AVC avc: denied { confidentiality } for pid=970 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 04:10:07.607000 audit[970]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555f96bde8d0 a1=337fc a2=7efc981bdbc5 a3=5 items=110 ppid=963 pid=970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:10:07.607000 audit: CWD cwd="/" Dec 13 04:10:07.607000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=1 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=2 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=3 name=(null) inode=14711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=4 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=5 name=(null) inode=14712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=6 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=7 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=8 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=9 name=(null) inode=14714 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=10 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=11 name=(null) inode=14715 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=12 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=13 name=(null) inode=14716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=14 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 
13 04:10:07.607000 audit: PATH item=15 name=(null) inode=14717 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=16 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=17 name=(null) inode=14718 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=18 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=19 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=20 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=21 name=(null) inode=14720 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=22 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=23 name=(null) inode=14721 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 04:10:07.607000 audit: PATH item=24 
name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=25 name=(null) inode=14722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=26 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=27 name=(null) inode=14723 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=28 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=29 name=(null) inode=14724 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=30 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=31 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=32 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=33 name=(null) inode=14726 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=34 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=35 name=(null) inode=14727 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=36 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=37 name=(null) inode=14728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=38 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=39 name=(null) inode=14729 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=40 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=41 name=(null) inode=14730 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=42 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=43 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=44 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=45 name=(null) inode=14732 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.634232 kernel: ACPI: button: Power Button [PWRF]
Dec 13 04:10:07.607000 audit: PATH item=46 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=47 name=(null) inode=14733 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=48 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=49 name=(null) inode=14734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=50 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=51 name=(null) inode=14735 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=52 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=53 name=(null) inode=14736 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=55 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=56 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=57 name=(null) inode=14738 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=58 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=59 name=(null) inode=14739 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=60 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=61 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=62 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=63 name=(null) inode=14741 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=64 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=65 name=(null) inode=14742 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=66 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=67 name=(null) inode=14743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=68 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=69 name=(null) inode=14744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=70 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=71 name=(null) inode=14745 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=72 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=73 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=74 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=75 name=(null) inode=14747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=76 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=77 name=(null) inode=14748 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=78 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=79 name=(null) inode=14749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=80 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=81 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=82 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=83 name=(null) inode=14751 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=84 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=85 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=86 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=87 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=88 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=89 name=(null) inode=14754 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=90 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=91 name=(null) inode=14755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=92 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=93 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=94 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=95 name=(null) inode=14757 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=96 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=97 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=98 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=99 name=(null) inode=14759 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=100 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=101 name=(null) inode=14760 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=102 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=103 name=(null) inode=14761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=104 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=105 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=106 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=107 name=(null) inode=14763 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PATH item=109 name=(null) inode=14764 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 04:10:07.607000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 04:10:07.656578 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 04:10:07.774230 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 04:10:07.800234 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 04:10:07.916275 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 04:10:08.075599 systemd-networkd[979]: lo: Link UP
Dec 13 04:10:08.076272 systemd-networkd[979]: lo: Gained carrier
Dec 13 04:10:08.077530 systemd-networkd[979]: Enumeration completed
Dec 13 04:10:08.077947 systemd[1]: Started systemd-networkd.service.
Dec 13 04:10:08.078271 systemd-networkd[979]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 04:10:08.081766 systemd-networkd[979]: eth0: Link UP
Dec 13 04:10:08.081952 systemd-networkd[979]: eth0: Gained carrier
Dec 13 04:10:08.096465 systemd-networkd[979]: eth0: DHCPv4 address 172.24.4.188/24, gateway 172.24.4.1 acquired from 172.24.4.1
Dec 13 04:10:08.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.139628 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 04:10:08.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.141256 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 04:10:08.185267 lvm[992]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 04:10:08.229021 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 04:10:08.230506 systemd[1]: Reached target cryptsetup.target.
Dec 13 04:10:08.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.233877 systemd[1]: Starting lvm2-activation.service...
Dec 13 04:10:08.244333 lvm[993]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 04:10:08.282301 systemd[1]: Finished lvm2-activation.service.
Dec 13 04:10:08.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.283608 systemd[1]: Reached target local-fs-pre.target.
Dec 13 04:10:08.284783 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 04:10:08.284848 systemd[1]: Reached target local-fs.target.
Dec 13 04:10:08.286012 systemd[1]: Reached target machines.target.
Dec 13 04:10:08.289726 systemd[1]: Starting ldconfig.service...
Dec 13 04:10:08.291946 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 04:10:08.292068 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 04:10:08.294491 systemd[1]: Starting systemd-boot-update.service...
Dec 13 04:10:08.297691 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 04:10:08.302857 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 04:10:08.312416 systemd[1]: Starting systemd-sysext.service...
Dec 13 04:10:08.328288 systemd[1]: boot.automount: Got automount request for /boot, triggered by 995 (bootctl)
Dec 13 04:10:08.330845 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 04:10:08.451109 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 04:10:08.456283 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 04:10:08.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.477845 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 04:10:08.478302 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 04:10:08.688295 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 04:10:08.750979 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 04:10:08.753586 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 04:10:08.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.816295 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 04:10:08.847270 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 04:10:08.895415 (sd-sysext)[1007]: Using extensions 'kubernetes'.
Dec 13 04:10:08.896928 (sd-sysext)[1007]: Merged extensions into '/usr'.
Dec 13 04:10:08.932437 systemd-fsck[1004]: fsck.fat 4.2 (2021-01-31)
Dec 13 04:10:08.932437 systemd-fsck[1004]: /dev/vda1: 789 files, 119291/258078 clusters
Dec 13 04:10:08.948176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 04:10:08.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.951133 systemd[1]: Mounting boot.mount...
Dec 13 04:10:08.951596 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:10:08.957627 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 04:10:08.958324 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 04:10:08.960782 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 04:10:08.963496 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 04:10:08.965752 systemd[1]: Starting modprobe@loop.service...
Dec 13 04:10:08.966316 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 04:10:08.966459 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 04:10:08.966607 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:10:08.969852 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 04:10:08.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.971799 systemd[1]: Finished systemd-sysext.service.
Dec 13 04:10:08.972510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:10:08.972630 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 04:10:08.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.977043 systemd[1]: Starting ensure-sysext.service...
Dec 13 04:10:08.983979 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 04:10:08.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:08.987360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 04:10:08.987494 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 04:10:08.988292 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 04:10:08.988408 systemd[1]: Finished modprobe@loop.service.
Dec 13 04:10:08.991940 systemd[1]: Reloading.
Dec 13 04:10:08.997332 systemd-tmpfiles[1015]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 04:10:09.002720 systemd-tmpfiles[1015]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 04:10:09.005491 systemd-tmpfiles[1015]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 04:10:09.086308 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-12-13T04:10:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 04:10:09.087478 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-12-13T04:10:09Z" level=info msg="torcx already run"
Dec 13 04:10:09.211901 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 04:10:09.211924 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 04:10:09.239223 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 04:10:09.309000 audit: BPF prog-id=24 op=LOAD
Dec 13 04:10:09.309000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 04:10:09.311000 audit: BPF prog-id=25 op=LOAD
Dec 13 04:10:09.311000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 04:10:09.312000 audit: BPF prog-id=26 op=LOAD
Dec 13 04:10:09.312000 audit: BPF prog-id=27 op=LOAD
Dec 13 04:10:09.312000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 04:10:09.312000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 04:10:09.313000 audit: BPF prog-id=28 op=LOAD
Dec 13 04:10:09.313000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 04:10:09.313000 audit: BPF prog-id=29 op=LOAD
Dec 13 04:10:09.313000 audit: BPF prog-id=30 op=LOAD
Dec 13 04:10:09.313000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 04:10:09.313000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 04:10:09.314000 audit: BPF prog-id=31 op=LOAD
Dec 13 04:10:09.314000 audit: BPF prog-id=32 op=LOAD
Dec 13 04:10:09.314000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 04:10:09.314000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 04:10:09.323983 systemd[1]: Mounted boot.mount.
Dec 13 04:10:09.327483 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 04:10:09.327535 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 04:10:09.342759 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:10:09.343009 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 04:10:09.344471 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 04:10:09.346101 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 04:10:09.349441 systemd[1]: Starting modprobe@loop.service...
Dec 13 04:10:09.350028 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 04:10:09.350168 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 04:10:09.350339 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:10:09.351338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:10:09.351503 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 04:10:09.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:09.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:09.352433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 04:10:09.352549 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 04:10:09.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:09.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:09.355275 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 04:10:09.357197 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:10:09.357492 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 04:10:09.359844 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 04:10:09.361740 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 04:10:09.363337 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 04:10:09.363464 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 04:10:09.363590 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:10:09.364514 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 04:10:09.364659 systemd[1]: Finished modprobe@loop.service.
Dec 13 04:10:09.365922 ldconfig[994]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 04:10:09.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:09.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:09.366295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 04:10:09.366417 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 04:10:09.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:09.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:09.367276 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 04:10:09.370422 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:10:09.370683 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 04:10:09.372853 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 04:10:09.374620 systemd[1]: Starting modprobe@drm.service...
Dec 13 04:10:09.377919 systemd[1]: Starting modprobe@loop.service...
Dec 13 04:10:09.378492 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 04:10:09.378607 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 04:10:09.380481 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 04:10:09.381135 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 04:10:09.382726 systemd[1]: Finished systemd-boot-update.service.
Dec 13 04:10:09.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 04:10:09.383966 systemd[1]: Finished ldconfig.service.
Dec 13 04:10:09.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.384820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 04:10:09.384951 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 04:10:09.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.385885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 04:10:09.386006 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 04:10:09.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.387521 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 04:10:09.387672 systemd[1]: Finished modprobe@drm.service. Dec 13 04:10:09.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 04:10:09.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.390854 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 04:10:09.390990 systemd[1]: Finished modprobe@loop.service. Dec 13 04:10:09.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.393084 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 04:10:09.393200 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 04:10:09.396176 systemd[1]: Finished ensure-sysext.service. Dec 13 04:10:09.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.439003 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 04:10:09.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.440941 systemd[1]: Starting audit-rules.service... Dec 13 04:10:09.442653 systemd[1]: Starting clean-ca-certificates.service... Dec 13 04:10:09.445349 systemd[1]: Starting systemd-journal-catalog-update.service... 
Dec 13 04:10:09.446000 audit: BPF prog-id=33 op=LOAD Dec 13 04:10:09.448431 systemd[1]: Starting systemd-resolved.service... Dec 13 04:10:09.451000 audit: BPF prog-id=34 op=LOAD Dec 13 04:10:09.454388 systemd[1]: Starting systemd-timesyncd.service... Dec 13 04:10:09.456478 systemd[1]: Starting systemd-update-utmp.service... Dec 13 04:10:09.458009 systemd[1]: Finished clean-ca-certificates.service. Dec 13 04:10:09.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.458790 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 04:10:09.474000 audit[1097]: SYSTEM_BOOT pid=1097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.476543 systemd[1]: Finished systemd-update-utmp.service. Dec 13 04:10:09.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.490748 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 04:10:09.492596 systemd[1]: Starting systemd-update-done.service... Dec 13 04:10:09.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.499832 systemd[1]: Finished systemd-update-done.service. 
Dec 13 04:10:09.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 04:10:09.529000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 04:10:09.529000 audit[1112]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9d01ef80 a2=420 a3=0 items=0 ppid=1091 pid=1112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 04:10:09.529000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 04:10:09.530004 augenrules[1112]: No rules Dec 13 04:10:09.530373 systemd[1]: Finished audit-rules.service. Dec 13 04:10:09.542304 systemd[1]: Started systemd-timesyncd.service. Dec 13 04:10:09.542950 systemd[1]: Reached target time-set.target. Dec 13 04:10:09.546024 systemd-resolved[1094]: Positive Trust Anchors: Dec 13 04:10:09.546042 systemd-resolved[1094]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 04:10:09.546078 systemd-resolved[1094]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 04:10:09.554344 systemd-resolved[1094]: Using system hostname 'ci-3510-3-6-e-a81afd2c25.novalocal'. Dec 13 04:10:09.555952 systemd[1]: Started systemd-resolved.service. 
Dec 13 04:10:09.556541 systemd[1]: Reached target network.target. Dec 13 04:10:09.556962 systemd[1]: Reached target nss-lookup.target. Dec 13 04:10:09.557482 systemd[1]: Reached target sysinit.target. Dec 13 04:10:09.558029 systemd[1]: Started motdgen.path. Dec 13 04:10:09.558512 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 04:10:09.559221 systemd[1]: Started logrotate.timer. Dec 13 04:10:09.559745 systemd[1]: Started mdadm.timer. Dec 13 04:10:09.560132 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 04:10:09.560573 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 04:10:09.560606 systemd[1]: Reached target paths.target. Dec 13 04:10:09.561008 systemd[1]: Reached target timers.target. Dec 13 04:10:09.561745 systemd[1]: Listening on dbus.socket. Dec 13 04:10:09.563395 systemd[1]: Starting docker.socket... Dec 13 04:10:09.567129 systemd[1]: Listening on sshd.socket. Dec 13 04:10:09.567672 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:10:09.568080 systemd[1]: Listening on docker.socket. Dec 13 04:10:09.568575 systemd[1]: Reached target sockets.target. Dec 13 04:10:09.568998 systemd[1]: Reached target basic.target. Dec 13 04:10:09.569481 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 04:10:09.569516 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 04:10:09.570517 systemd[1]: Starting containerd.service... Dec 13 04:10:09.572790 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 04:10:09.576544 systemd[1]: Starting dbus.service... Dec 13 04:10:09.577989 systemd[1]: Starting enable-oem-cloudinit.service... 
Dec 13 04:10:09.579528 systemd[1]: Starting extend-filesystems.service... Dec 13 04:10:09.580694 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 04:10:09.584851 systemd[1]: Starting motdgen.service... Dec 13 04:10:09.587019 systemd[1]: Starting prepare-helm.service... Dec 13 04:10:09.589986 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 04:10:09.591958 systemd[1]: Starting sshd-keygen.service... Dec 13 04:10:09.596697 systemd[1]: Starting systemd-logind.service... Dec 13 04:10:09.597176 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 04:10:09.597315 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 04:10:09.612915 jq[1135]: true Dec 13 04:10:09.597761 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 04:10:09.614395 jq[1125]: false Dec 13 04:10:09.601341 systemd[1]: Starting update-engine.service... Dec 13 04:10:09.603592 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 04:10:09.614477 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 04:10:09.614730 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 04:10:09.627710 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 04:10:09.627888 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 04:10:09.641342 jq[1139]: true Dec 13 04:10:09.647419 systemd-timesyncd[1096]: Contacted time server 82.64.42.185:123 (0.flatcar.pool.ntp.org). Dec 13 04:10:09.647490 systemd-timesyncd[1096]: Initial clock synchronization to Fri 2024-12-13 04:10:09.444701 UTC. 
Dec 13 04:10:09.648160 tar[1137]: linux-amd64/helm Dec 13 04:10:09.656537 extend-filesystems[1126]: Found loop1 Dec 13 04:10:09.657591 extend-filesystems[1126]: Found vda Dec 13 04:10:09.658143 extend-filesystems[1126]: Found vda1 Dec 13 04:10:09.658736 extend-filesystems[1126]: Found vda2 Dec 13 04:10:09.659664 extend-filesystems[1126]: Found vda3 Dec 13 04:10:09.659664 extend-filesystems[1126]: Found usr Dec 13 04:10:09.659664 extend-filesystems[1126]: Found vda4 Dec 13 04:10:09.659664 extend-filesystems[1126]: Found vda6 Dec 13 04:10:09.659664 extend-filesystems[1126]: Found vda7 Dec 13 04:10:09.659664 extend-filesystems[1126]: Found vda9 Dec 13 04:10:09.659664 extend-filesystems[1126]: Checking size of /dev/vda9 Dec 13 04:10:09.667241 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 04:10:09.667441 systemd[1]: Finished motdgen.service. Dec 13 04:10:09.670386 systemd-networkd[979]: eth0: Gained IPv6LL Dec 13 04:10:09.672136 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 04:10:09.672706 systemd[1]: Reached target network-online.target. Dec 13 04:10:09.676562 dbus-daemon[1123]: [system] SELinux support is enabled Dec 13 04:10:09.674907 systemd[1]: Starting kubelet.service... Dec 13 04:10:09.676709 systemd[1]: Started dbus.service. Dec 13 04:10:09.680373 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 04:10:09.680412 systemd[1]: Reached target system-config.target. Dec 13 04:10:09.680953 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 04:10:09.680972 systemd[1]: Reached target user-config.target. 
Dec 13 04:10:09.723908 extend-filesystems[1126]: Resized partition /dev/vda9 Dec 13 04:10:09.737423 extend-filesystems[1177]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 04:10:09.759259 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 04:10:09.804164 update_engine[1134]: I1213 04:10:09.803243 1134 main.cc:92] Flatcar Update Engine starting Dec 13 04:10:09.879114 update_engine[1134]: I1213 04:10:09.811442 1134 update_check_scheduler.cc:74] Next update check in 8m9s Dec 13 04:10:09.879234 env[1138]: time="2024-12-13T04:10:09.877790111Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 04:10:09.809414 systemd[1]: Started update-engine.service. Dec 13 04:10:09.811953 systemd[1]: Started locksmithd.service. Dec 13 04:10:09.879803 systemd-logind[1133]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 04:10:09.879849 systemd-logind[1133]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 04:10:09.881456 systemd-logind[1133]: New seat seat0. Dec 13 04:10:09.889032 systemd[1]: Started systemd-logind.service. Dec 13 04:10:09.898790 bash[1174]: Updated "/home/core/.ssh/authorized_keys" Dec 13 04:10:09.897725 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 04:10:09.904233 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 04:10:09.933098 env[1138]: time="2024-12-13T04:10:09.933057836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 04:10:10.000339 extend-filesystems[1177]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 04:10:10.000339 extend-filesystems[1177]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 04:10:10.000339 extend-filesystems[1177]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. 
Dec 13 04:10:10.004024 extend-filesystems[1126]: Resized filesystem in /dev/vda9 Dec 13 04:10:10.001810 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 04:10:10.005387 env[1138]: time="2024-12-13T04:10:10.000776792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:10:10.001965 systemd[1]: Finished extend-filesystems.service. Dec 13 04:10:10.008737 env[1138]: time="2024-12-13T04:10:10.008640558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:10:10.008783 env[1138]: time="2024-12-13T04:10:10.008728746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:10:10.010084 env[1138]: time="2024-12-13T04:10:10.009168426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:10:10.010154 env[1138]: time="2024-12-13T04:10:10.010092665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 04:10:10.010154 env[1138]: time="2024-12-13T04:10:10.010137603Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 04:10:10.010265 env[1138]: time="2024-12-13T04:10:10.010165697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 04:10:10.010453 env[1138]: time="2024-12-13T04:10:10.010412362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 04:10:10.013005 env[1138]: time="2024-12-13T04:10:10.012959634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 04:10:10.013396 env[1138]: time="2024-12-13T04:10:10.013340596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 04:10:10.013438 env[1138]: time="2024-12-13T04:10:10.013400397Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 04:10:10.013573 env[1138]: time="2024-12-13T04:10:10.013530497Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 04:10:10.016077 env[1138]: time="2024-12-13T04:10:10.013573775Z" level=info msg="metadata content store policy set" policy=shared Dec 13 04:10:10.035658 env[1138]: time="2024-12-13T04:10:10.035599187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 04:10:10.035731 env[1138]: time="2024-12-13T04:10:10.035682795Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 04:10:10.035762 env[1138]: time="2024-12-13T04:10:10.035727744Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 04:10:10.035869 env[1138]: time="2024-12-13T04:10:10.035830432Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 04:10:10.036028 env[1138]: time="2024-12-13T04:10:10.035990598Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 04:10:10.036066 env[1138]: time="2024-12-13T04:10:10.036046005Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 04:10:10.036108 env[1138]: time="2024-12-13T04:10:10.036083590Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 04:10:10.036139 env[1138]: time="2024-12-13T04:10:10.036119614Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 04:10:10.036186 env[1138]: time="2024-12-13T04:10:10.036154485Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 04:10:10.036285 env[1138]: time="2024-12-13T04:10:10.036249127Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 04:10:10.036330 env[1138]: time="2024-12-13T04:10:10.036302356Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 04:10:10.036368 env[1138]: time="2024-12-13T04:10:10.036335899Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 04:10:10.036616 env[1138]: time="2024-12-13T04:10:10.036572984Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 04:10:10.036832 env[1138]: time="2024-12-13T04:10:10.036781682Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 04:10:10.037747 env[1138]: time="2024-12-13T04:10:10.037696429Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 04:10:10.037832 env[1138]: time="2024-12-13T04:10:10.037796765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 04:10:10.037887 env[1138]: time="2024-12-13T04:10:10.037845189Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 04:10:10.038019 env[1138]: time="2024-12-13T04:10:10.037969595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038162 env[1138]: time="2024-12-13T04:10:10.038123678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038219 env[1138]: time="2024-12-13T04:10:10.038174211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038292 env[1138]: time="2024-12-13T04:10:10.038255866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038337 env[1138]: time="2024-12-13T04:10:10.038306840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038368 env[1138]: time="2024-12-13T04:10:10.038340451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038406 env[1138]: time="2024-12-13T04:10:10.038372070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038458 env[1138]: time="2024-12-13T04:10:10.038408270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038458 env[1138]: time="2024-12-13T04:10:10.038447398Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 04:10:10.038775 env[1138]: time="2024-12-13T04:10:10.038732693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 04:10:10.038812 env[1138]: time="2024-12-13T04:10:10.038788158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038856 env[1138]: time="2024-12-13T04:10:10.038821652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 04:10:10.038886 env[1138]: time="2024-12-13T04:10:10.038853173Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 04:10:10.038931 env[1138]: time="2024-12-13T04:10:10.038890436Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 04:10:10.038966 env[1138]: time="2024-12-13T04:10:10.038929721Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 04:10:10.039002 env[1138]: time="2024-12-13T04:10:10.038970275Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 04:10:10.039088 env[1138]: time="2024-12-13T04:10:10.039056012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 04:10:10.042472 env[1138]: time="2024-12-13T04:10:10.042272257Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 04:10:10.047295 env[1138]: time="2024-12-13T04:10:10.042501920Z" level=info msg="Connect containerd service" Dec 13 04:10:10.047295 env[1138]: time="2024-12-13T04:10:10.045477369Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 04:10:10.049006 env[1138]: time="2024-12-13T04:10:10.048948978Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 04:10:10.049500 env[1138]: time="2024-12-13T04:10:10.049457950Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 04:10:10.049608 env[1138]: time="2024-12-13T04:10:10.049571879Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 04:10:10.050463 env[1138]: time="2024-12-13T04:10:10.049689772Z" level=info msg="containerd successfully booted in 0.276795s" Dec 13 04:10:10.049759 systemd[1]: Started containerd.service. 
Dec 13 04:10:10.089386 env[1138]: time="2024-12-13T04:10:10.089334141Z" level=info msg="Start subscribing containerd event" Dec 13 04:10:10.089553 env[1138]: time="2024-12-13T04:10:10.089538016Z" level=info msg="Start recovering state" Dec 13 04:10:10.089682 env[1138]: time="2024-12-13T04:10:10.089668759Z" level=info msg="Start event monitor" Dec 13 04:10:10.089759 env[1138]: time="2024-12-13T04:10:10.089740874Z" level=info msg="Start snapshots syncer" Dec 13 04:10:10.089833 env[1138]: time="2024-12-13T04:10:10.089819844Z" level=info msg="Start cni network conf syncer for default" Dec 13 04:10:10.089895 env[1138]: time="2024-12-13T04:10:10.089877828Z" level=info msg="Start streaming server" Dec 13 04:10:10.494739 tar[1137]: linux-amd64/LICENSE Dec 13 04:10:10.494739 tar[1137]: linux-amd64/README.md Dec 13 04:10:10.499522 systemd[1]: Finished prepare-helm.service. Dec 13 04:10:10.513282 locksmithd[1182]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 04:10:10.934046 sshd_keygen[1146]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 04:10:10.969771 systemd[1]: Finished sshd-keygen.service. Dec 13 04:10:10.971845 systemd[1]: Starting issuegen.service... Dec 13 04:10:10.977982 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 04:10:10.978138 systemd[1]: Finished issuegen.service. Dec 13 04:10:10.980092 systemd[1]: Starting systemd-user-sessions.service... Dec 13 04:10:10.988275 systemd[1]: Finished systemd-user-sessions.service. Dec 13 04:10:10.990303 systemd[1]: Started getty@tty1.service. Dec 13 04:10:10.993618 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 04:10:10.994314 systemd[1]: Reached target getty.target. Dec 13 04:10:11.380849 systemd[1]: Started kubelet.service. 
Dec 13 04:10:12.792430 kubelet[1207]: E1213 04:10:12.792303 1207 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 04:10:12.796097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 04:10:12.796503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 04:10:12.797061 systemd[1]: kubelet.service: Consumed 1.832s CPU time. Dec 13 04:10:16.694776 coreos-metadata[1121]: Dec 13 04:10:16.694 WARN failed to locate config-drive, using the metadata service API instead Dec 13 04:10:16.781235 coreos-metadata[1121]: Dec 13 04:10:16.781 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 04:10:17.197395 coreos-metadata[1121]: Dec 13 04:10:17.197 INFO Fetch successful Dec 13 04:10:17.197527 coreos-metadata[1121]: Dec 13 04:10:17.197 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 04:10:17.215713 coreos-metadata[1121]: Dec 13 04:10:17.215 INFO Fetch successful Dec 13 04:10:17.220116 unknown[1121]: wrote ssh authorized keys file for user: core Dec 13 04:10:17.248899 update-ssh-keys[1218]: Updated "/home/core/.ssh/authorized_keys" Dec 13 04:10:17.250147 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 04:10:17.250551 systemd[1]: Reached target multi-user.target. Dec 13 04:10:17.251914 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 04:10:17.260552 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 04:10:17.260713 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 04:10:17.260926 systemd[1]: Startup finished in 954ms (kernel) + 7.982s (initrd) + 14.533s (userspace) = 23.469s. 
Dec 13 04:10:19.219956 systemd[1]: Created slice system-sshd.slice. Dec 13 04:10:19.222633 systemd[1]: Started sshd@0-172.24.4.188:22-172.24.4.1:52630.service. Dec 13 04:10:20.213864 sshd[1221]: Accepted publickey for core from 172.24.4.1 port 52630 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM Dec 13 04:10:20.218510 sshd[1221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:10:20.243487 systemd[1]: Created slice user-500.slice. Dec 13 04:10:20.245905 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 04:10:20.254289 systemd-logind[1133]: New session 1 of user core. Dec 13 04:10:20.268779 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 04:10:20.272599 systemd[1]: Starting user@500.service... Dec 13 04:10:20.280815 (systemd)[1224]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:10:20.415565 systemd[1224]: Queued start job for default target default.target. Dec 13 04:10:20.416124 systemd[1224]: Reached target paths.target. Dec 13 04:10:20.416144 systemd[1224]: Reached target sockets.target. Dec 13 04:10:20.416158 systemd[1224]: Reached target timers.target. Dec 13 04:10:20.416172 systemd[1224]: Reached target basic.target. Dec 13 04:10:20.416240 systemd[1224]: Reached target default.target. Dec 13 04:10:20.416268 systemd[1224]: Startup finished in 122ms. Dec 13 04:10:20.417477 systemd[1]: Started user@500.service. Dec 13 04:10:20.420071 systemd[1]: Started session-1.scope. Dec 13 04:10:20.896277 systemd[1]: Started sshd@1-172.24.4.188:22-172.24.4.1:52642.service. Dec 13 04:10:22.463995 sshd[1233]: Accepted publickey for core from 172.24.4.1 port 52642 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM Dec 13 04:10:22.467865 sshd[1233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:10:22.478385 systemd-logind[1133]: New session 2 of user core. Dec 13 04:10:22.479268 systemd[1]: Started session-2.scope. 
Dec 13 04:10:22.936914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 04:10:22.937364 systemd[1]: Stopped kubelet.service.
Dec 13 04:10:22.937441 systemd[1]: kubelet.service: Consumed 1.832s CPU time.
Dec 13 04:10:22.940315 systemd[1]: Starting kubelet.service...
Dec 13 04:10:23.079425 sshd[1233]: pam_unix(sshd:session): session closed for user core
Dec 13 04:10:23.084042 systemd[1]: Started sshd@2-172.24.4.188:22-172.24.4.1:52658.service.
Dec 13 04:10:23.085547 systemd[1]: sshd@1-172.24.4.188:22-172.24.4.1:52642.service: Deactivated successfully.
Dec 13 04:10:23.086343 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 04:10:23.090501 systemd-logind[1133]: Session 2 logged out. Waiting for processes to exit.
Dec 13 04:10:23.093299 systemd-logind[1133]: Removed session 2.
Dec 13 04:10:23.132380 systemd[1]: Started kubelet.service.
Dec 13 04:10:23.181813 kubelet[1244]: E1213 04:10:23.181702 1244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:10:23.188259 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:10:23.188401 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:10:24.232990 sshd[1240]: Accepted publickey for core from 172.24.4.1 port 52658 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:10:24.236075 sshd[1240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:10:24.246826 systemd-logind[1133]: New session 3 of user core.
Dec 13 04:10:24.247545 systemd[1]: Started session-3.scope.
Dec 13 04:10:24.875927 sshd[1240]: pam_unix(sshd:session): session closed for user core
Dec 13 04:10:24.883009 systemd[1]: Started sshd@3-172.24.4.188:22-172.24.4.1:39646.service.
Dec 13 04:10:24.884301 systemd[1]: sshd@2-172.24.4.188:22-172.24.4.1:52658.service: Deactivated successfully.
Dec 13 04:10:24.887530 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 04:10:24.889964 systemd-logind[1133]: Session 3 logged out. Waiting for processes to exit.
Dec 13 04:10:24.892968 systemd-logind[1133]: Removed session 3.
Dec 13 04:10:26.056913 sshd[1254]: Accepted publickey for core from 172.24.4.1 port 39646 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:10:26.060353 sshd[1254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:10:26.070701 systemd-logind[1133]: New session 4 of user core.
Dec 13 04:10:26.071457 systemd[1]: Started session-4.scope.
Dec 13 04:10:26.698280 sshd[1254]: pam_unix(sshd:session): session closed for user core
Dec 13 04:10:26.706402 systemd[1]: sshd@3-172.24.4.188:22-172.24.4.1:39646.service: Deactivated successfully.
Dec 13 04:10:26.707723 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 04:10:26.709152 systemd-logind[1133]: Session 4 logged out. Waiting for processes to exit.
Dec 13 04:10:26.711726 systemd[1]: Started sshd@4-172.24.4.188:22-172.24.4.1:39650.service.
Dec 13 04:10:26.715692 systemd-logind[1133]: Removed session 4.
Dec 13 04:10:27.885587 sshd[1261]: Accepted publickey for core from 172.24.4.1 port 39650 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:10:27.889345 sshd[1261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:10:27.900352 systemd-logind[1133]: New session 5 of user core.
Dec 13 04:10:27.901178 systemd[1]: Started session-5.scope.
Dec 13 04:10:28.387108 sudo[1264]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 04:10:28.388441 sudo[1264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 04:10:28.443748 systemd[1]: Starting docker.service...
Dec 13 04:10:28.519140 env[1274]: time="2024-12-13T04:10:28.519066496Z" level=info msg="Starting up"
Dec 13 04:10:28.523003 env[1274]: time="2024-12-13T04:10:28.522962449Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 04:10:28.523155 env[1274]: time="2024-12-13T04:10:28.523127460Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 04:10:28.523364 env[1274]: time="2024-12-13T04:10:28.523328917Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 04:10:28.523483 env[1274]: time="2024-12-13T04:10:28.523459452Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 04:10:28.527500 env[1274]: time="2024-12-13T04:10:28.527421627Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 04:10:28.527500 env[1274]: time="2024-12-13T04:10:28.527479553Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 04:10:28.527647 env[1274]: time="2024-12-13T04:10:28.527519608Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 04:10:28.527647 env[1274]: time="2024-12-13T04:10:28.527560131Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 04:10:28.540040 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2183819533-merged.mount: Deactivated successfully.
Dec 13 04:10:28.614772 env[1274]: time="2024-12-13T04:10:28.614578024Z" level=info msg="Loading containers: start."
Dec 13 04:10:28.883283 kernel: Initializing XFRM netlink socket
Dec 13 04:10:28.951529 env[1274]: time="2024-12-13T04:10:28.951477070Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 04:10:29.050335 systemd-networkd[979]: docker0: Link UP
Dec 13 04:10:29.068452 env[1274]: time="2024-12-13T04:10:29.068398446Z" level=info msg="Loading containers: done."
Dec 13 04:10:29.081629 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4937225-merged.mount: Deactivated successfully.
Dec 13 04:10:29.095640 env[1274]: time="2024-12-13T04:10:29.095593219Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 04:10:29.095942 env[1274]: time="2024-12-13T04:10:29.095923187Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 04:10:29.096108 env[1274]: time="2024-12-13T04:10:29.096092555Z" level=info msg="Daemon has completed initialization"
Dec 13 04:10:29.133589 systemd[1]: Started docker.service.
Dec 13 04:10:29.160285 env[1274]: time="2024-12-13T04:10:29.159775318Z" level=info msg="API listen on /run/docker.sock"
Dec 13 04:10:30.913589 env[1138]: time="2024-12-13T04:10:30.913489223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 04:10:31.654590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559696498.mount: Deactivated successfully.
Dec 13 04:10:33.439242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 04:10:33.439452 systemd[1]: Stopped kubelet.service.
Dec 13 04:10:33.441439 systemd[1]: Starting kubelet.service...
Dec 13 04:10:33.525820 systemd[1]: Started kubelet.service.
Dec 13 04:10:34.110901 kubelet[1402]: E1213 04:10:34.110862 1402 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:10:34.112582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:10:34.112704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:10:34.397256 env[1138]: time="2024-12-13T04:10:34.394821409Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:34.401436 env[1138]: time="2024-12-13T04:10:34.401358798Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:34.404634 env[1138]: time="2024-12-13T04:10:34.404584706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:34.408400 env[1138]: time="2024-12-13T04:10:34.408379047Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:34.410520 env[1138]: time="2024-12-13T04:10:34.410444399Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 04:10:34.412716 env[1138]: time="2024-12-13T04:10:34.412693318Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 04:10:36.969816 env[1138]: time="2024-12-13T04:10:36.969043982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:36.973025 env[1138]: time="2024-12-13T04:10:36.972990182Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:36.976346 env[1138]: time="2024-12-13T04:10:36.976323594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:36.980048 env[1138]: time="2024-12-13T04:10:36.980023895Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:36.981771 env[1138]: time="2024-12-13T04:10:36.981744720Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 04:10:36.982687 env[1138]: time="2024-12-13T04:10:36.982666215Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 04:10:39.262869 env[1138]: time="2024-12-13T04:10:39.262735438Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:39.266382 env[1138]: time="2024-12-13T04:10:39.266311504Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:39.271092 env[1138]: time="2024-12-13T04:10:39.271022799Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:39.275394 env[1138]: time="2024-12-13T04:10:39.275340574Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:39.277637 env[1138]: time="2024-12-13T04:10:39.277576822Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 04:10:39.279057 env[1138]: time="2024-12-13T04:10:39.279009144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 04:10:40.926805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672762745.mount: Deactivated successfully.
Dec 13 04:10:42.218753 env[1138]: time="2024-12-13T04:10:42.218629249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:42.222193 env[1138]: time="2024-12-13T04:10:42.222102666Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:42.225718 env[1138]: time="2024-12-13T04:10:42.225617895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:42.228764 env[1138]: time="2024-12-13T04:10:42.228700482Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:42.229884 env[1138]: time="2024-12-13T04:10:42.229781293Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 04:10:42.231081 env[1138]: time="2024-12-13T04:10:42.231028981Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 04:10:42.873655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1648585335.mount: Deactivated successfully.
Dec 13 04:10:44.363535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 04:10:44.363773 systemd[1]: Stopped kubelet.service.
Dec 13 04:10:44.365967 systemd[1]: Starting kubelet.service...
Dec 13 04:10:44.449407 systemd[1]: Started kubelet.service.
Dec 13 04:10:44.507754 kubelet[1412]: E1213 04:10:44.507673 1412 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:10:44.509723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:10:44.509858 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:10:44.947923 env[1138]: time="2024-12-13T04:10:44.947785639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:44.952036 env[1138]: time="2024-12-13T04:10:44.951942534Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:44.959245 env[1138]: time="2024-12-13T04:10:44.959129651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:44.964370 env[1138]: time="2024-12-13T04:10:44.964299139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:44.967052 env[1138]: time="2024-12-13T04:10:44.966969955Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 04:10:44.969879 env[1138]: time="2024-12-13T04:10:44.969805439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 04:10:45.598941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669856754.mount: Deactivated successfully.
Dec 13 04:10:45.613523 env[1138]: time="2024-12-13T04:10:45.613300444Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:45.616386 env[1138]: time="2024-12-13T04:10:45.616312931Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:45.619734 env[1138]: time="2024-12-13T04:10:45.619674463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:45.623131 env[1138]: time="2024-12-13T04:10:45.623061585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:45.624733 env[1138]: time="2024-12-13T04:10:45.624637674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 04:10:45.625946 env[1138]: time="2024-12-13T04:10:45.625865640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 04:10:46.257467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89785653.mount: Deactivated successfully.
Dec 13 04:10:51.465979 env[1138]: time="2024-12-13T04:10:51.465885106Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:51.469771 env[1138]: time="2024-12-13T04:10:51.469717175Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:51.474967 env[1138]: time="2024-12-13T04:10:51.474916688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:51.480055 env[1138]: time="2024-12-13T04:10:51.480002503Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:10:51.482634 env[1138]: time="2024-12-13T04:10:51.482551772Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 04:10:54.672536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 04:10:54.673050 systemd[1]: Stopped kubelet.service.
Dec 13 04:10:54.681121 systemd[1]: Starting kubelet.service...
Dec 13 04:10:54.818052 update_engine[1134]: I1213 04:10:54.817979 1134 update_attempter.cc:509] Updating boot flags...
Dec 13 04:10:55.154909 systemd[1]: Started kubelet.service.
Dec 13 04:10:55.281818 kubelet[1453]: E1213 04:10:55.281767 1453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:10:55.288159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:10:55.288339 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:10:55.884553 systemd[1]: Stopped kubelet.service.
Dec 13 04:10:55.893006 systemd[1]: Starting kubelet.service...
Dec 13 04:10:55.951634 systemd[1]: Reloading.
Dec 13 04:10:56.062742 /usr/lib/systemd/system-generators/torcx-generator[1487]: time="2024-12-13T04:10:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 04:10:56.062775 /usr/lib/systemd/system-generators/torcx-generator[1487]: time="2024-12-13T04:10:56Z" level=info msg="torcx already run"
Dec 13 04:10:56.369002 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 04:10:56.369159 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 04:10:56.392440 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 04:10:56.490850 systemd[1]: Started kubelet.service.
Dec 13 04:10:56.494679 systemd[1]: Stopping kubelet.service...
Dec 13 04:10:56.495119 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 04:10:56.495406 systemd[1]: Stopped kubelet.service.
Dec 13 04:10:56.497079 systemd[1]: Starting kubelet.service...
Dec 13 04:10:56.965680 systemd[1]: Started kubelet.service.
Dec 13 04:10:57.070028 kubelet[1541]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:10:57.070734 kubelet[1541]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 04:10:57.070734 kubelet[1541]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:10:57.071035 kubelet[1541]: I1213 04:10:57.070886 1541 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 04:10:57.730971 kubelet[1541]: I1213 04:10:57.730916 1541 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 04:10:57.731171 kubelet[1541]: I1213 04:10:57.731160 1541 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 04:10:57.731607 kubelet[1541]: I1213 04:10:57.731592 1541 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 04:10:57.784758 kubelet[1541]: E1213 04:10:57.784631 1541 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError"
Dec 13 04:10:57.788637 kubelet[1541]: I1213 04:10:57.788582 1541 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 04:10:57.801900 kubelet[1541]: E1213 04:10:57.801811 1541 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 04:10:57.802120 kubelet[1541]: I1213 04:10:57.802092 1541 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 04:10:57.812695 kubelet[1541]: I1213 04:10:57.812659 1541 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 04:10:57.813079 kubelet[1541]: I1213 04:10:57.813051 1541 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 04:10:57.813612 kubelet[1541]: I1213 04:10:57.813546 1541 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 04:10:57.814160 kubelet[1541]: I1213 04:10:57.813766 1541 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-6-e-a81afd2c25.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 04:10:57.814582 kubelet[1541]: I1213 04:10:57.814552 1541 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 04:10:57.814743 kubelet[1541]: I1213 04:10:57.814721 1541 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 04:10:57.815087 kubelet[1541]: I1213 04:10:57.815058 1541 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 04:10:57.824663 kubelet[1541]: I1213 04:10:57.824625 1541 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 04:10:57.824855 kubelet[1541]: I1213 04:10:57.824829 1541 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 04:10:57.825063 kubelet[1541]: I1213 04:10:57.825039 1541 kubelet.go:314] "Adding apiserver pod source"
Dec 13 04:10:57.825349 kubelet[1541]: I1213 04:10:57.825322 1541 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 04:10:57.838296 kubelet[1541]: W1213 04:10:57.838189 1541 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-e-a81afd2c25.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Dec 13 04:10:57.838296 kubelet[1541]: E1213 04:10:57.838275 1541 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-e-a81afd2c25.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError"
Dec 13 04:10:57.838590 kubelet[1541]: W1213 04:10:57.838559 1541 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Dec 13 04:10:57.838681 kubelet[1541]: E1213 04:10:57.838593 1541 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError"
Dec 13 04:10:57.846404 kubelet[1541]: I1213 04:10:57.846264 1541 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 04:10:57.855832 kubelet[1541]: I1213 04:10:57.855784 1541 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 04:10:57.858020 kubelet[1541]: W1213 04:10:57.857960 1541 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 04:10:57.859319 kubelet[1541]: I1213 04:10:57.859284 1541 server.go:1269] "Started kubelet"
Dec 13 04:10:57.870482 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 04:10:57.870625 kubelet[1541]: I1213 04:10:57.870558 1541 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 04:10:57.872900 kubelet[1541]: E1213 04:10:57.868393 1541 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.188:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-e-a81afd2c25.novalocal.1810a129e5d1fd95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-e-a81afd2c25.novalocal,UID:ci-3510-3-6-e-a81afd2c25.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-e-a81afd2c25.novalocal,},FirstTimestamp:2024-12-13 04:10:57.859116437 +0000 UTC m=+0.881595812,LastTimestamp:2024-12-13 04:10:57.859116437 +0000 UTC m=+0.881595812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-e-a81afd2c25.novalocal,}"
Dec 13 04:10:57.879783 kubelet[1541]: I1213 04:10:57.879739 1541 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 04:10:57.880059 kubelet[1541]: E1213 04:10:57.879990 1541 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-6-e-a81afd2c25.novalocal\" not found"
Dec 13 04:10:57.880504 kubelet[1541]: I1213 04:10:57.880465 1541 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 04:10:57.880504 kubelet[1541]: I1213 04:10:57.880511 1541 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 04:10:57.880909 kubelet[1541]: I1213 04:10:57.880860 1541 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 04:10:57.881477 kubelet[1541]: I1213 04:10:57.881428 1541 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 04:10:57.883068 kubelet[1541]: W1213 04:10:57.882976 1541 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Dec 13 04:10:57.883367 kubelet[1541]: E1213 04:10:57.883314 1541 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError"
Dec 13 04:10:57.883577 kubelet[1541]: I1213 04:10:57.883143 1541 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 04:10:57.884123 kubelet[1541]: I1213 04:10:57.884089 1541 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 04:10:57.884361 kubelet[1541]: I1213 04:10:57.884326 1541 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 04:10:57.884766 kubelet[1541]: E1213 04:10:57.884708 1541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-e-a81afd2c25.novalocal?timeout=10s\": dial tcp 172.24.4.188:6443: connect: connection refused" interval="200ms"
Dec 13 04:10:57.885575 kubelet[1541]: I1213 04:10:57.885538 1541 factory.go:221] Registration of the systemd container factory successfully
Dec 13 04:10:57.885722 kubelet[1541]: I1213 04:10:57.885623 1541 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 04:10:57.885806 kubelet[1541]: E1213 04:10:57.885787 1541 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 04:10:57.887382 kubelet[1541]: I1213 04:10:57.887186 1541 factory.go:221] Registration of the containerd container factory successfully
Dec 13 04:10:57.918526 kubelet[1541]: I1213 04:10:57.918423 1541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 04:10:57.921509 kubelet[1541]: I1213 04:10:57.921472 1541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 04:10:57.921509 kubelet[1541]: I1213 04:10:57.921509 1541 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 04:10:57.921706 kubelet[1541]: I1213 04:10:57.921538 1541 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 04:10:57.921706 kubelet[1541]: E1213 04:10:57.921589 1541 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 04:10:57.929382 kubelet[1541]: I1213 04:10:57.929356 1541 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 04:10:57.929449 kubelet[1541]: I1213 04:10:57.929390 1541 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 04:10:57.929449 kubelet[1541]: I1213 04:10:57.929406 1541 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 04:10:57.930096 kubelet[1541]: W1213 04:10:57.929950 1541 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Dec 13 04:10:57.930096 kubelet[1541]: E1213 04:10:57.930007 1541 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError"
Dec 13 04:10:57.935064 kubelet[1541]: I1213 04:10:57.935037 1541 policy_none.go:49] "None policy: Start"
Dec 13 04:10:57.935828 kubelet[1541]: I1213 04:10:57.935809 1541 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 04:10:57.935891 kubelet[1541]: I1213 04:10:57.935832 1541 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 04:10:57.941361 systemd[1]: Created slice kubepods.slice.
Dec 13 04:10:57.946451 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 04:10:57.954816 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 04:10:57.956751 kubelet[1541]: I1213 04:10:57.956729 1541 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 04:10:57.957022 kubelet[1541]: I1213 04:10:57.957010 1541 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 04:10:57.957117 kubelet[1541]: I1213 04:10:57.957083 1541 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 04:10:57.957762 kubelet[1541]: I1213 04:10:57.957748 1541 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 04:10:57.959589 kubelet[1541]: E1213 04:10:57.959541 1541 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-e-a81afd2c25.novalocal\" not found"
Dec 13 04:10:58.042090 systemd[1]: Created slice kubepods-burstable-pod58ed6293f9132df0f68d3125d4785a2e.slice.
Dec 13 04:10:58.060653 systemd[1]: Created slice kubepods-burstable-podf0be13ee6a0740bec5c7bf28be3d786b.slice.
Dec 13 04:10:58.067565 kubelet[1541]: I1213 04:10:58.067481 1541 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.069680 kubelet[1541]: E1213 04:10:58.069607 1541 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.076200 systemd[1]: Created slice kubepods-burstable-pod9b83bc2fc646f5ad5ad8df294df84f5e.slice. Dec 13 04:10:58.086188 kubelet[1541]: E1213 04:10:58.086058 1541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-e-a81afd2c25.novalocal?timeout=10s\": dial tcp 172.24.4.188:6443: connect: connection refused" interval="400ms" Dec 13 04:10:58.181827 kubelet[1541]: I1213 04:10:58.181765 1541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0be13ee6a0740bec5c7bf28be3d786b-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"f0be13ee6a0740bec5c7bf28be3d786b\") " pod="kube-system/kube-scheduler-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.182129 kubelet[1541]: I1213 04:10:58.182088 1541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b83bc2fc646f5ad5ad8df294df84f5e-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"9b83bc2fc646f5ad5ad8df294df84f5e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.182420 kubelet[1541]: I1213 04:10:58.182383 1541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.182660 kubelet[1541]: I1213 04:10:58.182617 1541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.182881 kubelet[1541]: I1213 04:10:58.182845 1541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b83bc2fc646f5ad5ad8df294df84f5e-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"9b83bc2fc646f5ad5ad8df294df84f5e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.183146 kubelet[1541]: I1213 04:10:58.183104 1541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b83bc2fc646f5ad5ad8df294df84f5e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"9b83bc2fc646f5ad5ad8df294df84f5e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.183412 kubelet[1541]: I1213 04:10:58.183375 1541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: 
\"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.183631 kubelet[1541]: I1213 04:10:58.183592 1541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.183845 kubelet[1541]: I1213 04:10:58.183809 1541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.274131 kubelet[1541]: I1213 04:10:58.274095 1541 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.275195 kubelet[1541]: E1213 04:10:58.275146 1541 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.360636 env[1138]: time="2024-12-13T04:10:58.357701522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal,Uid:58ed6293f9132df0f68d3125d4785a2e,Namespace:kube-system,Attempt:0,}" Dec 13 04:10:58.371117 env[1138]: time="2024-12-13T04:10:58.371023885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-e-a81afd2c25.novalocal,Uid:f0be13ee6a0740bec5c7bf28be3d786b,Namespace:kube-system,Attempt:0,}" Dec 13 
04:10:58.382353 env[1138]: time="2024-12-13T04:10:58.382287617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal,Uid:9b83bc2fc646f5ad5ad8df294df84f5e,Namespace:kube-system,Attempt:0,}" Dec 13 04:10:58.487711 kubelet[1541]: E1213 04:10:58.487610 1541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-e-a81afd2c25.novalocal?timeout=10s\": dial tcp 172.24.4.188:6443: connect: connection refused" interval="800ms" Dec 13 04:10:58.678696 kubelet[1541]: I1213 04:10:58.678298 1541 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.679028 kubelet[1541]: E1213 04:10:58.678981 1541 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:58.733568 kubelet[1541]: W1213 04:10:58.733450 1541 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-e-a81afd2c25.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused Dec 13 04:10:58.733736 kubelet[1541]: E1213 04:10:58.733593 1541 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-e-a81afd2c25.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError" Dec 13 04:10:58.943131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881901057.mount: Deactivated successfully. 
Dec 13 04:10:58.961227 env[1138]: time="2024-12-13T04:10:58.961100823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:58.968722 env[1138]: time="2024-12-13T04:10:58.968650779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:58.983838 env[1138]: time="2024-12-13T04:10:58.983771049Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:58.987069 env[1138]: time="2024-12-13T04:10:58.986992567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:58.989743 env[1138]: time="2024-12-13T04:10:58.989689906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:58.991932 env[1138]: time="2024-12-13T04:10:58.991880351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:59.000695 env[1138]: time="2024-12-13T04:10:59.000602329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:59.009113 env[1138]: time="2024-12-13T04:10:59.009033923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 04:10:59.012669 env[1138]: time="2024-12-13T04:10:59.012591116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:59.018575 env[1138]: time="2024-12-13T04:10:59.018507501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:59.020772 env[1138]: time="2024-12-13T04:10:59.020719312Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:59.021107 kubelet[1541]: W1213 04:10:59.021028 1541 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused Dec 13 04:10:59.021272 kubelet[1541]: E1213 04:10:59.021134 1541 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError" Dec 13 04:10:59.023107 env[1138]: time="2024-12-13T04:10:59.023031896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:10:59.096607 env[1138]: time="2024-12-13T04:10:59.096534143Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:10:59.096791 env[1138]: time="2024-12-13T04:10:59.096584449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:10:59.096791 env[1138]: time="2024-12-13T04:10:59.096599868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:10:59.097059 env[1138]: time="2024-12-13T04:10:59.097017804Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fd104392d4f45d01b60d9534cdbddaa9ee54d9927e9033dc277c11c107dc860 pid=1594 runtime=io.containerd.runc.v2 Dec 13 04:10:59.097261 kubelet[1541]: W1213 04:10:59.097225 1541 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused Dec 13 04:10:59.097526 kubelet[1541]: E1213 04:10:59.097274 1541 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError" Dec 13 04:10:59.100177 env[1138]: time="2024-12-13T04:10:59.100007297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:10:59.100177 env[1138]: time="2024-12-13T04:10:59.100046511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:10:59.100177 env[1138]: time="2024-12-13T04:10:59.100060748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:10:59.100432 env[1138]: time="2024-12-13T04:10:59.100390486Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/616940fac0add4cd59694db9941dabbab188dace071e82f1ac1c25fef8b2075c pid=1596 runtime=io.containerd.runc.v2 Dec 13 04:10:59.100763 env[1138]: time="2024-12-13T04:10:59.100712840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:10:59.100822 env[1138]: time="2024-12-13T04:10:59.100755341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:10:59.100822 env[1138]: time="2024-12-13T04:10:59.100769377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:10:59.100992 env[1138]: time="2024-12-13T04:10:59.100957616Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5ad90270430060c89c583154dbc145756caace81ca50545a489ec115e7102be pid=1595 runtime=io.containerd.runc.v2 Dec 13 04:10:59.128073 systemd[1]: Started cri-containerd-9fd104392d4f45d01b60d9534cdbddaa9ee54d9927e9033dc277c11c107dc860.scope. Dec 13 04:10:59.148024 systemd[1]: Started cri-containerd-616940fac0add4cd59694db9941dabbab188dace071e82f1ac1c25fef8b2075c.scope. Dec 13 04:10:59.156447 systemd[1]: Started cri-containerd-e5ad90270430060c89c583154dbc145756caace81ca50545a489ec115e7102be.scope. 
Dec 13 04:10:59.232403 env[1138]: time="2024-12-13T04:10:59.232279031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal,Uid:9b83bc2fc646f5ad5ad8df294df84f5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fd104392d4f45d01b60d9534cdbddaa9ee54d9927e9033dc277c11c107dc860\"" Dec 13 04:10:59.238247 env[1138]: time="2024-12-13T04:10:59.234045305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal,Uid:58ed6293f9132df0f68d3125d4785a2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5ad90270430060c89c583154dbc145756caace81ca50545a489ec115e7102be\"" Dec 13 04:10:59.238812 env[1138]: time="2024-12-13T04:10:59.238779128Z" level=info msg="CreateContainer within sandbox \"e5ad90270430060c89c583154dbc145756caace81ca50545a489ec115e7102be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 04:10:59.239541 env[1138]: time="2024-12-13T04:10:59.239194728Z" level=info msg="CreateContainer within sandbox \"9fd104392d4f45d01b60d9534cdbddaa9ee54d9927e9033dc277c11c107dc860\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 04:10:59.260155 env[1138]: time="2024-12-13T04:10:59.260091747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-e-a81afd2c25.novalocal,Uid:f0be13ee6a0740bec5c7bf28be3d786b,Namespace:kube-system,Attempt:0,} returns sandbox id \"616940fac0add4cd59694db9941dabbab188dace071e82f1ac1c25fef8b2075c\"" Dec 13 04:10:59.268080 env[1138]: time="2024-12-13T04:10:59.268042034Z" level=info msg="CreateContainer within sandbox \"616940fac0add4cd59694db9941dabbab188dace071e82f1ac1c25fef8b2075c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 04:10:59.289094 kubelet[1541]: E1213 04:10:59.288970 1541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-e-a81afd2c25.novalocal?timeout=10s\": dial tcp 172.24.4.188:6443: connect: connection refused" interval="1.6s" Dec 13 04:10:59.299517 env[1138]: time="2024-12-13T04:10:59.299452149Z" level=info msg="CreateContainer within sandbox \"9fd104392d4f45d01b60d9534cdbddaa9ee54d9927e9033dc277c11c107dc860\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2d6815523a3eaff9731348a6aca50eb0292c7310758c527456f0dd76c5ab16b3\"" Dec 13 04:10:59.300449 env[1138]: time="2024-12-13T04:10:59.300415943Z" level=info msg="StartContainer for \"2d6815523a3eaff9731348a6aca50eb0292c7310758c527456f0dd76c5ab16b3\"" Dec 13 04:10:59.315719 env[1138]: time="2024-12-13T04:10:59.315656311Z" level=info msg="CreateContainer within sandbox \"e5ad90270430060c89c583154dbc145756caace81ca50545a489ec115e7102be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c0d4d0d2c6c7f26c1c86d0b02c39b4fa71b5d772b5f5fc4336dcbc33867ad6b\"" Dec 13 04:10:59.316512 env[1138]: time="2024-12-13T04:10:59.316473668Z" level=info msg="StartContainer for \"5c0d4d0d2c6c7f26c1c86d0b02c39b4fa71b5d772b5f5fc4336dcbc33867ad6b\"" Dec 13 04:10:59.322068 env[1138]: time="2024-12-13T04:10:59.322013075Z" level=info msg="CreateContainer within sandbox \"616940fac0add4cd59694db9941dabbab188dace071e82f1ac1c25fef8b2075c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6a98ee2c504b7db03039ba0a45d26928aa6aa7d002fa44435aef6083ff1b6644\"" Dec 13 04:10:59.322593 env[1138]: time="2024-12-13T04:10:59.322542012Z" level=info msg="StartContainer for \"6a98ee2c504b7db03039ba0a45d26928aa6aa7d002fa44435aef6083ff1b6644\"" Dec 13 04:10:59.325488 systemd[1]: Started cri-containerd-2d6815523a3eaff9731348a6aca50eb0292c7310758c527456f0dd76c5ab16b3.scope. 
Dec 13 04:10:59.359065 systemd[1]: Started cri-containerd-6a98ee2c504b7db03039ba0a45d26928aa6aa7d002fa44435aef6083ff1b6644.scope. Dec 13 04:10:59.378179 systemd[1]: Started cri-containerd-5c0d4d0d2c6c7f26c1c86d0b02c39b4fa71b5d772b5f5fc4336dcbc33867ad6b.scope. Dec 13 04:10:59.397112 kubelet[1541]: W1213 04:10:59.396971 1541 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused Dec 13 04:10:59.397112 kubelet[1541]: E1213 04:10:59.397066 1541 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.188:6443: connect: connection refused" logger="UnhandledError" Dec 13 04:10:59.404485 env[1138]: time="2024-12-13T04:10:59.404415489Z" level=info msg="StartContainer for \"2d6815523a3eaff9731348a6aca50eb0292c7310758c527456f0dd76c5ab16b3\" returns successfully" Dec 13 04:10:59.459788 env[1138]: time="2024-12-13T04:10:59.459709338Z" level=info msg="StartContainer for \"6a98ee2c504b7db03039ba0a45d26928aa6aa7d002fa44435aef6083ff1b6644\" returns successfully" Dec 13 04:10:59.469944 env[1138]: time="2024-12-13T04:10:59.469889691Z" level=info msg="StartContainer for \"5c0d4d0d2c6c7f26c1c86d0b02c39b4fa71b5d772b5f5fc4336dcbc33867ad6b\" returns successfully" Dec 13 04:10:59.482281 kubelet[1541]: I1213 04:10:59.482220 1541 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:59.482933 kubelet[1541]: E1213 04:10:59.482815 1541 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" 
node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:10:59.683116 kubelet[1541]: E1213 04:10:59.682967 1541 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.188:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-e-a81afd2c25.novalocal.1810a129e5d1fd95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-e-a81afd2c25.novalocal,UID:ci-3510-3-6-e-a81afd2c25.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-e-a81afd2c25.novalocal,},FirstTimestamp:2024-12-13 04:10:57.859116437 +0000 UTC m=+0.881595812,LastTimestamp:2024-12-13 04:10:57.859116437 +0000 UTC m=+0.881595812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-e-a81afd2c25.novalocal,}" Dec 13 04:11:01.084965 kubelet[1541]: I1213 04:11:01.084923 1541 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:01.609299 kubelet[1541]: E1213 04:11:01.609250 1541 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-e-a81afd2c25.novalocal\" not found" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:01.656042 kubelet[1541]: I1213 04:11:01.655995 1541 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:01.829493 kubelet[1541]: I1213 04:11:01.829461 1541 apiserver.go:52] "Watching apiserver" Dec 13 04:11:01.881029 kubelet[1541]: I1213 04:11:01.880948 1541 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 04:11:01.958125 kubelet[1541]: E1213 04:11:01.958053 1541 kubelet.go:1915] "Failed creating a 
mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:03.546748 kubelet[1541]: W1213 04:11:03.546673 1541 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 04:11:04.213806 systemd[1]: Reloading. Dec 13 04:11:04.363704 /usr/lib/systemd/system-generators/torcx-generator[1831]: time="2024-12-13T04:11:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:11:04.363741 /usr/lib/systemd/system-generators/torcx-generator[1831]: time="2024-12-13T04:11:04Z" level=info msg="torcx already run" Dec 13 04:11:04.448408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:11:04.448431 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:11:04.471075 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:11:04.613770 kubelet[1541]: I1213 04:11:04.613723 1541 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 04:11:04.619468 systemd[1]: Stopping kubelet.service... Dec 13 04:11:04.635560 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 04:11:04.635947 systemd[1]: Stopped kubelet.service. 
Dec 13 04:11:04.636037 systemd[1]: kubelet.service: Consumed 1.289s CPU time. Dec 13 04:11:04.640270 systemd[1]: Starting kubelet.service... Dec 13 04:11:07.025768 systemd[1]: Started kubelet.service. Dec 13 04:11:07.152840 kubelet[1882]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 04:11:07.152840 kubelet[1882]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 04:11:07.152840 kubelet[1882]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 04:11:07.152840 kubelet[1882]: I1213 04:11:07.152029 1882 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 04:11:07.160853 kubelet[1882]: I1213 04:11:07.160810 1882 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 04:11:07.160853 kubelet[1882]: I1213 04:11:07.160844 1882 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 04:11:07.161170 kubelet[1882]: I1213 04:11:07.161142 1882 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 04:11:07.164650 kubelet[1882]: I1213 04:11:07.164613 1882 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 04:11:07.172063 kubelet[1882]: I1213 04:11:07.172032 1882 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 04:11:07.176883 kubelet[1882]: E1213 04:11:07.176808 1882 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 04:11:07.176883 kubelet[1882]: I1213 04:11:07.176884 1882 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 04:11:07.180792 kubelet[1882]: I1213 04:11:07.180768 1882 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 04:11:07.180919 kubelet[1882]: I1213 04:11:07.180872 1882 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 04:11:07.180993 kubelet[1882]: I1213 04:11:07.180961 1882 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 04:11:07.181796 kubelet[1882]: I1213 04:11:07.180992 1882 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510-3-6-e-a81afd2c25.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 04:11:07.182035 kubelet[1882]: I1213 04:11:07.182021 1882 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 04:11:07.182102 kubelet[1882]: I1213 04:11:07.182094 1882 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 04:11:07.182194 kubelet[1882]: I1213 04:11:07.182182 1882 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:11:07.182380 kubelet[1882]: I1213 04:11:07.182369 1882 
kubelet.go:408] "Attempting to sync node with API server" Dec 13 04:11:07.182453 kubelet[1882]: I1213 04:11:07.182443 1882 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 04:11:07.182535 kubelet[1882]: I1213 04:11:07.182526 1882 kubelet.go:314] "Adding apiserver pod source" Dec 13 04:11:07.182604 kubelet[1882]: I1213 04:11:07.182595 1882 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 04:11:07.189599 kubelet[1882]: I1213 04:11:07.188512 1882 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 04:11:07.189599 kubelet[1882]: I1213 04:11:07.188987 1882 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 04:11:07.189599 kubelet[1882]: I1213 04:11:07.189456 1882 server.go:1269] "Started kubelet" Dec 13 04:11:07.193777 kubelet[1882]: I1213 04:11:07.191545 1882 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 04:11:07.199635 kubelet[1882]: I1213 04:11:07.198942 1882 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 04:11:07.200824 kubelet[1882]: I1213 04:11:07.200269 1882 server.go:460] "Adding debug handlers to kubelet server" Dec 13 04:11:07.200452 sudo[1896]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 04:11:07.200685 sudo[1896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 04:11:07.201897 kubelet[1882]: I1213 04:11:07.201772 1882 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 04:11:07.202850 kubelet[1882]: I1213 04:11:07.202731 1882 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 04:11:07.203673 kubelet[1882]: I1213 04:11:07.203647 1882 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 04:11:07.205053 kubelet[1882]: I1213 04:11:07.205026 1882 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 04:11:07.205860 kubelet[1882]: E1213 04:11:07.205829 1882 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-6-e-a81afd2c25.novalocal\" not found" Dec 13 04:11:07.212404 kubelet[1882]: I1213 04:11:07.212359 1882 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 04:11:07.212526 kubelet[1882]: I1213 04:11:07.212507 1882 reconciler.go:26] "Reconciler: start to sync state" Dec 13 04:11:07.222055 kubelet[1882]: I1213 04:11:07.221646 1882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 04:11:07.222908 kubelet[1882]: I1213 04:11:07.222884 1882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 04:11:07.222985 kubelet[1882]: I1213 04:11:07.222920 1882 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 04:11:07.222985 kubelet[1882]: I1213 04:11:07.222944 1882 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 04:11:07.223048 kubelet[1882]: E1213 04:11:07.222987 1882 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 04:11:07.230233 kubelet[1882]: I1213 04:11:07.225129 1882 factory.go:221] Registration of the systemd container factory successfully Dec 13 04:11:07.230473 kubelet[1882]: I1213 04:11:07.230450 1882 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 04:11:07.242403 kubelet[1882]: E1213 04:11:07.242377 1882 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 04:11:07.242984 kubelet[1882]: I1213 04:11:07.242970 1882 factory.go:221] Registration of the containerd container factory successfully Dec 13 04:11:07.318769 kubelet[1882]: I1213 04:11:07.318689 1882 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 04:11:07.318769 kubelet[1882]: I1213 04:11:07.318711 1882 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 04:11:07.318769 kubelet[1882]: I1213 04:11:07.318730 1882 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:11:07.318938 kubelet[1882]: I1213 04:11:07.318898 1882 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 04:11:07.318938 kubelet[1882]: I1213 04:11:07.318910 1882 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 04:11:07.318938 kubelet[1882]: I1213 04:11:07.318932 1882 policy_none.go:49] "None policy: Start" Dec 13 04:11:07.320308 kubelet[1882]: I1213 04:11:07.320273 1882 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 04:11:07.320308 kubelet[1882]: I1213 04:11:07.320298 1882 state_mem.go:35] "Initializing new in-memory state store" Dec 13 04:11:07.320566 kubelet[1882]: I1213 04:11:07.320539 1882 state_mem.go:75] "Updated machine memory state" Dec 13 04:11:07.323119 kubelet[1882]: E1213 04:11:07.323039 1882 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 04:11:07.327243 kubelet[1882]: I1213 04:11:07.326875 1882 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 04:11:07.327243 kubelet[1882]: I1213 04:11:07.327046 1882 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 04:11:07.327243 kubelet[1882]: I1213 04:11:07.327058 1882 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 
04:11:07.327916 kubelet[1882]: I1213 04:11:07.327896 1882 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 04:11:07.446019 kubelet[1882]: I1213 04:11:07.445980 1882 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.549342 kubelet[1882]: I1213 04:11:07.549273 1882 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.549901 kubelet[1882]: I1213 04:11:07.549848 1882 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.560410 kubelet[1882]: W1213 04:11:07.560369 1882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 04:11:07.560793 kubelet[1882]: E1213 04:11:07.560756 1882 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-6-e-a81afd2c25.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.562122 kubelet[1882]: W1213 04:11:07.562095 1882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 04:11:07.592673 kubelet[1882]: W1213 04:11:07.592500 1882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 04:11:07.615544 kubelet[1882]: I1213 04:11:07.615506 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b83bc2fc646f5ad5ad8df294df84f5e-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"9b83bc2fc646f5ad5ad8df294df84f5e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 
04:11:07.615694 kubelet[1882]: I1213 04:11:07.615568 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.615694 kubelet[1882]: I1213 04:11:07.615600 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.615694 kubelet[1882]: I1213 04:11:07.615642 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0be13ee6a0740bec5c7bf28be3d786b-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"f0be13ee6a0740bec5c7bf28be3d786b\") " pod="kube-system/kube-scheduler-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.615694 kubelet[1882]: I1213 04:11:07.615664 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b83bc2fc646f5ad5ad8df294df84f5e-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"9b83bc2fc646f5ad5ad8df294df84f5e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.615824 kubelet[1882]: I1213 04:11:07.615682 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9b83bc2fc646f5ad5ad8df294df84f5e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"9b83bc2fc646f5ad5ad8df294df84f5e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.615824 kubelet[1882]: I1213 04:11:07.615719 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.615824 kubelet[1882]: I1213 04:11:07.615742 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:07.615824 kubelet[1882]: I1213 04:11:07.615761 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58ed6293f9132df0f68d3125d4785a2e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal\" (UID: \"58ed6293f9132df0f68d3125d4785a2e\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:08.205554 kubelet[1882]: I1213 04:11:08.205485 1882 apiserver.go:52] "Watching apiserver" Dec 13 04:11:08.213976 kubelet[1882]: I1213 04:11:08.213939 1882 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 04:11:08.313712 kubelet[1882]: W1213 04:11:08.312963 1882 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 04:11:08.313712 kubelet[1882]: E1213 04:11:08.313104 1882 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" Dec 13 04:11:08.388148 kubelet[1882]: I1213 04:11:08.388050 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" podStartSLOduration=1.38802881 podStartE2EDuration="1.38802881s" podCreationTimestamp="2024-12-13 04:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:11:08.371895159 +0000 UTC m=+1.302373990" watchObservedRunningTime="2024-12-13 04:11:08.38802881 +0000 UTC m=+1.318507631" Dec 13 04:11:08.388521 kubelet[1882]: I1213 04:11:08.388495 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-e-a81afd2c25.novalocal" podStartSLOduration=1.38847225 podStartE2EDuration="1.38847225s" podCreationTimestamp="2024-12-13 04:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:11:08.386405864 +0000 UTC m=+1.316884685" watchObservedRunningTime="2024-12-13 04:11:08.38847225 +0000 UTC m=+1.318951071" Dec 13 04:11:08.413258 kubelet[1882]: I1213 04:11:08.413179 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-e-a81afd2c25.novalocal" podStartSLOduration=5.413157625 podStartE2EDuration="5.413157625s" podCreationTimestamp="2024-12-13 04:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-12-13 04:11:08.399382664 +0000 UTC m=+1.329861495" watchObservedRunningTime="2024-12-13 04:11:08.413157625 +0000 UTC m=+1.343636446" Dec 13 04:11:08.505896 sudo[1896]: pam_unix(sudo:session): session closed for user root Dec 13 04:11:10.443917 kubelet[1882]: I1213 04:11:10.443888 1882 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 04:11:10.444881 env[1138]: time="2024-12-13T04:11:10.444781012Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 04:11:10.445385 kubelet[1882]: I1213 04:11:10.445374 1882 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 04:11:11.014361 sudo[1264]: pam_unix(sudo:session): session closed for user root Dec 13 04:11:11.184423 systemd[1]: Created slice kubepods-burstable-pod4fdd07bf_7e56_4aec_89fd_b5aaa373b126.slice. Dec 13 04:11:11.191450 systemd[1]: Created slice kubepods-besteffort-pod6911d2b1_4b86_49c1_89a1_4ee9d286a288.slice. Dec 13 04:11:11.193099 sshd[1261]: pam_unix(sshd:session): session closed for user core Dec 13 04:11:11.197997 systemd[1]: sshd@4-172.24.4.188:22-172.24.4.1:39650.service: Deactivated successfully. Dec 13 04:11:11.198735 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 04:11:11.198877 systemd[1]: session-5.scope: Consumed 7.516s CPU time. Dec 13 04:11:11.199787 systemd-logind[1133]: Session 5 logged out. Waiting for processes to exit. Dec 13 04:11:11.201076 systemd-logind[1133]: Removed session 5. 
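The `pod_startup_latency_tracker.go` entries above carry structured key/value pairs (`pod=…`, `podStartSLOduration=…`) inside otherwise free-form journal lines. A minimal sketch of extracting them, assuming only the line format visible in this log (this helper is hypothetical, not part of kubelet or journald tooling):

```python
import re

# Matches the kubelet pod-startup-latency lines seen above:
#   ... "Observed pod startup duration" pod="<ns>/<name>" podStartSLOduration=<seconds> ...
PATTERN = re.compile(r'pod="(?P<pod>[^"]+)".*?podStartSLOduration=(?P<slo>[\d.]+)')

def startup_durations(lines):
    """Return {pod: podStartSLOduration in seconds} for matching journal lines."""
    out = {}
    for line in lines:
        m = PATTERN.search(line)
        if m:
            out[m.group("pod")] = float(m.group("slo"))
    return out

sample = (
    'Dec 13 04:11:08.388148 kubelet[1882]: I1213 04:11:08.388050 1882 '
    'pod_startup_latency_tracker.go:104] "Observed pod startup duration" '
    'pod="kube-system/kube-apiserver-ci-3510-3-6-e-a81afd2c25.novalocal" '
    'podStartSLOduration=1.38802881 podStartE2EDuration="1.38802881s"'
)
print(startup_durations([sample]))
```

Note the zero-value `firstStartedPulling`/`lastFinishedPulling` timestamps (`0001-01-01 00:00:00`) in these entries: static control-plane pods are pre-pulled or already present, so no image pull was observed.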
Dec 13 04:11:11.228447 kubelet[1882]: E1213 04:11:11.228367 1882 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-z5hmq lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-z5hmq lib-modules xtables-lock]: context canceled" pod="kube-system/cilium-xg9j5" podUID="4fdd07bf-7e56-4aec-89fd-b5aaa373b126" Dec 13 04:11:11.239022 kubelet[1882]: I1213 04:11:11.238971 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-clustermesh-secrets\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239022 kubelet[1882]: I1213 04:11:11.239016 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-config-path\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239293 kubelet[1882]: I1213 04:11:11.239052 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-host-proc-sys-net\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239293 kubelet[1882]: I1213 04:11:11.239073 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-bpf-maps\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239293 kubelet[1882]: I1213 04:11:11.239091 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cni-path\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239293 kubelet[1882]: I1213 04:11:11.239126 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-cgroup\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239293 kubelet[1882]: I1213 04:11:11.239147 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6911d2b1-4b86-49c1-89a1-4ee9d286a288-kube-proxy\") pod \"kube-proxy-zsc5s\" (UID: \"6911d2b1-4b86-49c1-89a1-4ee9d286a288\") " pod="kube-system/kube-proxy-zsc5s" Dec 13 04:11:11.239293 kubelet[1882]: I1213 04:11:11.239164 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6911d2b1-4b86-49c1-89a1-4ee9d286a288-xtables-lock\") pod \"kube-proxy-zsc5s\" (UID: \"6911d2b1-4b86-49c1-89a1-4ee9d286a288\") " pod="kube-system/kube-proxy-zsc5s" Dec 13 04:11:11.239476 kubelet[1882]: I1213 04:11:11.239181 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-etc-cni-netd\") pod \"cilium-xg9j5\" (UID: 
\"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239476 kubelet[1882]: I1213 04:11:11.239242 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxmx\" (UniqueName: \"kubernetes.io/projected/6911d2b1-4b86-49c1-89a1-4ee9d286a288-kube-api-access-bxxmx\") pod \"kube-proxy-zsc5s\" (UID: \"6911d2b1-4b86-49c1-89a1-4ee9d286a288\") " pod="kube-system/kube-proxy-zsc5s" Dec 13 04:11:11.239476 kubelet[1882]: I1213 04:11:11.239263 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-hubble-tls\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239476 kubelet[1882]: I1213 04:11:11.239281 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-host-proc-sys-kernel\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239476 kubelet[1882]: I1213 04:11:11.239319 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hmq\" (UniqueName: \"kubernetes.io/projected/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-kube-api-access-z5hmq\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239620 kubelet[1882]: I1213 04:11:11.239340 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-xtables-lock\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 
04:11:11.239620 kubelet[1882]: I1213 04:11:11.239359 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-run\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239620 kubelet[1882]: I1213 04:11:11.239403 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-lib-modules\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239620 kubelet[1882]: I1213 04:11:11.239426 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-hostproc\") pod \"cilium-xg9j5\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " pod="kube-system/cilium-xg9j5" Dec 13 04:11:11.239620 kubelet[1882]: I1213 04:11:11.239448 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6911d2b1-4b86-49c1-89a1-4ee9d286a288-lib-modules\") pod \"kube-proxy-zsc5s\" (UID: \"6911d2b1-4b86-49c1-89a1-4ee9d286a288\") " pod="kube-system/kube-proxy-zsc5s" Dec 13 04:11:11.340880 kubelet[1882]: I1213 04:11:11.340719 1882 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 04:11:11.443701 kubelet[1882]: I1213 04:11:11.443662 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-cgroup\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.443934 kubelet[1882]: I1213 04:11:11.443916 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-xtables-lock\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.444025 kubelet[1882]: I1213 04:11:11.444011 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-hostproc\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.444448 kubelet[1882]: I1213 04:11:11.444435 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-clustermesh-secrets\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.444538 kubelet[1882]: I1213 04:11:11.444525 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5hmq\" (UniqueName: \"kubernetes.io/projected/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-kube-api-access-z5hmq\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.444613 kubelet[1882]: I1213 04:11:11.444601 1882 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-run\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.444689 kubelet[1882]: I1213 04:11:11.444677 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-etc-cni-netd\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.444768 kubelet[1882]: I1213 04:11:11.444756 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-host-proc-sys-net\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.444844 kubelet[1882]: I1213 04:11:11.444832 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-host-proc-sys-kernel\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.445015 kubelet[1882]: I1213 04:11:11.445003 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-config-path\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.445092 kubelet[1882]: I1213 04:11:11.445079 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-bpf-maps\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: 
\"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.445171 kubelet[1882]: I1213 04:11:11.445158 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-hubble-tls\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.445267 kubelet[1882]: I1213 04:11:11.445254 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cni-path\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.445366 kubelet[1882]: I1213 04:11:11.445350 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-lib-modules\") pod \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\" (UID: \"4fdd07bf-7e56-4aec-89fd-b5aaa373b126\") " Dec 13 04:11:11.447348 kubelet[1882]: I1213 04:11:11.444374 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-hostproc" (OuterVolumeSpecName: "hostproc") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.447448 kubelet[1882]: I1213 04:11:11.444393 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.447511 kubelet[1882]: I1213 04:11:11.445009 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.447570 kubelet[1882]: I1213 04:11:11.445052 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.447625 kubelet[1882]: I1213 04:11:11.445450 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.447680 kubelet[1882]: I1213 04:11:11.445465 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.447739 kubelet[1882]: I1213 04:11:11.445475 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.447794 kubelet[1882]: I1213 04:11:11.445486 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.447849 kubelet[1882]: I1213 04:11:11.447328 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:11:11.447926 kubelet[1882]: I1213 04:11:11.447912 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.448960 kubelet[1882]: I1213 04:11:11.448926 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cni-path" (OuterVolumeSpecName: "cni-path") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:11:11.452351 systemd[1]: var-lib-kubelet-pods-4fdd07bf\x2d7e56\x2d4aec\x2d89fd\x2db5aaa373b126-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:11:11.453734 kubelet[1882]: I1213 04:11:11.453704 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:11:11.454241 kubelet[1882]: I1213 04:11:11.454170 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:11:11.455894 kubelet[1882]: I1213 04:11:11.455873 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-kube-api-access-z5hmq" (OuterVolumeSpecName: "kube-api-access-z5hmq") pod "4fdd07bf-7e56-4aec-89fd-b5aaa373b126" (UID: "4fdd07bf-7e56-4aec-89fd-b5aaa373b126"). InnerVolumeSpecName "kube-api-access-z5hmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:11:11.502604 env[1138]: time="2024-12-13T04:11:11.502505158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zsc5s,Uid:6911d2b1-4b86-49c1-89a1-4ee9d286a288,Namespace:kube-system,Attempt:0,}" Dec 13 04:11:11.540635 env[1138]: time="2024-12-13T04:11:11.540470689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:11:11.541133 env[1138]: time="2024-12-13T04:11:11.540642655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:11:11.541133 env[1138]: time="2024-12-13T04:11:11.540727676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:11:11.542957 env[1138]: time="2024-12-13T04:11:11.542131934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c1274d32a25182ea900722937d647ebc705f0de01fe747df6bb7f117bebe8ec pid=1964 runtime=io.containerd.runc.v2 Dec 13 04:11:11.545814 kubelet[1882]: I1213 04:11:11.545757 1882 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-host-proc-sys-net\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.545814 kubelet[1882]: I1213 04:11:11.545789 1882 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-host-proc-sys-kernel\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.545814 kubelet[1882]: I1213 04:11:11.545802 1882 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-bpf-maps\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.545814 kubelet[1882]: I1213 04:11:11.545813 1882 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-hubble-tls\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.545814 kubelet[1882]: I1213 04:11:11.545824 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-config-path\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.545814 kubelet[1882]: I1213 04:11:11.545835 1882 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cni-path\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.546479 kubelet[1882]: I1213 04:11:11.545847 1882 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-lib-modules\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.546479 kubelet[1882]: I1213 04:11:11.545860 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-cgroup\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.546479 kubelet[1882]: I1213 04:11:11.545870 1882 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-xtables-lock\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.546479 kubelet[1882]: I1213 04:11:11.545881 1882 reconciler_common.go:288] "Volume detached 
for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-hostproc\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.546479 kubelet[1882]: I1213 04:11:11.545895 1882 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-clustermesh-secrets\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.546479 kubelet[1882]: I1213 04:11:11.545905 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-cilium-run\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.546479 kubelet[1882]: I1213 04:11:11.545915 1882 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-etc-cni-netd\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.546881 kubelet[1882]: I1213 04:11:11.545970 1882 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z5hmq\" (UniqueName: \"kubernetes.io/projected/4fdd07bf-7e56-4aec-89fd-b5aaa373b126-kube-api-access-z5hmq\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:11:11.571423 systemd[1]: Started cri-containerd-6c1274d32a25182ea900722937d647ebc705f0de01fe747df6bb7f117bebe8ec.scope. Dec 13 04:11:11.609949 systemd[1]: Created slice kubepods-besteffort-pod82ed48ee_ea06_445c_9d0e_5263df564047.slice. 
Dec 13 04:11:11.633771 env[1138]: time="2024-12-13T04:11:11.633704224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zsc5s,Uid:6911d2b1-4b86-49c1-89a1-4ee9d286a288,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c1274d32a25182ea900722937d647ebc705f0de01fe747df6bb7f117bebe8ec\"" Dec 13 04:11:11.637536 env[1138]: time="2024-12-13T04:11:11.637489118Z" level=info msg="CreateContainer within sandbox \"6c1274d32a25182ea900722937d647ebc705f0de01fe747df6bb7f117bebe8ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 04:11:11.647644 kubelet[1882]: I1213 04:11:11.646649 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crsjc\" (UniqueName: \"kubernetes.io/projected/82ed48ee-ea06-445c-9d0e-5263df564047-kube-api-access-crsjc\") pod \"cilium-operator-5d85765b45-2p8qj\" (UID: \"82ed48ee-ea06-445c-9d0e-5263df564047\") " pod="kube-system/cilium-operator-5d85765b45-2p8qj" Dec 13 04:11:11.647644 kubelet[1882]: I1213 04:11:11.646749 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82ed48ee-ea06-445c-9d0e-5263df564047-cilium-config-path\") pod \"cilium-operator-5d85765b45-2p8qj\" (UID: \"82ed48ee-ea06-445c-9d0e-5263df564047\") " pod="kube-system/cilium-operator-5d85765b45-2p8qj" Dec 13 04:11:11.663128 env[1138]: time="2024-12-13T04:11:11.663031826Z" level=info msg="CreateContainer within sandbox \"6c1274d32a25182ea900722937d647ebc705f0de01fe747df6bb7f117bebe8ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4e2b7aaf123700a05869fa3637bb3466af34bd7deff97eebaa6cd0a1f6b761e\"" Dec 13 04:11:11.665939 env[1138]: time="2024-12-13T04:11:11.665888754Z" level=info msg="StartContainer for \"c4e2b7aaf123700a05869fa3637bb3466af34bd7deff97eebaa6cd0a1f6b761e\"" Dec 13 04:11:11.690265 systemd[1]: Started 
cri-containerd-c4e2b7aaf123700a05869fa3637bb3466af34bd7deff97eebaa6cd0a1f6b761e.scope. Dec 13 04:11:11.743040 env[1138]: time="2024-12-13T04:11:11.742943655Z" level=info msg="StartContainer for \"c4e2b7aaf123700a05869fa3637bb3466af34bd7deff97eebaa6cd0a1f6b761e\" returns successfully" Dec 13 04:11:11.913326 env[1138]: time="2024-12-13T04:11:11.913264431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2p8qj,Uid:82ed48ee-ea06-445c-9d0e-5263df564047,Namespace:kube-system,Attempt:0,}" Dec 13 04:11:11.956180 env[1138]: time="2024-12-13T04:11:11.956033496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:11:11.956180 env[1138]: time="2024-12-13T04:11:11.956101484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:11:11.956180 env[1138]: time="2024-12-13T04:11:11.956115511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:11:11.956670 env[1138]: time="2024-12-13T04:11:11.956425317Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8 pid=2041 runtime=io.containerd.runc.v2 Dec 13 04:11:11.989249 systemd[1]: Started cri-containerd-2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8.scope. 
Dec 13 04:11:12.049867 env[1138]: time="2024-12-13T04:11:12.049473662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2p8qj,Uid:82ed48ee-ea06-445c-9d0e-5263df564047,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8\"" Dec 13 04:11:12.053308 env[1138]: time="2024-12-13T04:11:12.051736613Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 04:11:12.307876 systemd[1]: Removed slice kubepods-burstable-pod4fdd07bf_7e56_4aec_89fd_b5aaa373b126.slice. Dec 13 04:11:12.324960 kubelet[1882]: I1213 04:11:12.324914 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zsc5s" podStartSLOduration=1.324896974 podStartE2EDuration="1.324896974s" podCreationTimestamp="2024-12-13 04:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:11:12.324260891 +0000 UTC m=+5.254739722" watchObservedRunningTime="2024-12-13 04:11:12.324896974 +0000 UTC m=+5.255375796" Dec 13 04:11:12.373084 systemd[1]: var-lib-kubelet-pods-4fdd07bf\x2d7e56\x2d4aec\x2d89fd\x2db5aaa373b126-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz5hmq.mount: Deactivated successfully. Dec 13 04:11:12.373225 systemd[1]: var-lib-kubelet-pods-4fdd07bf\x2d7e56\x2d4aec\x2d89fd\x2db5aaa373b126-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:11:12.402108 systemd[1]: Created slice kubepods-burstable-pod3912dd89_8293_4ed1_a56c_e962a5601092.slice. 
Dec 13 04:11:12.452560 kubelet[1882]: I1213 04:11:12.452508 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-lib-modules\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.452560 kubelet[1882]: I1213 04:11:12.452558 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3912dd89-8293-4ed1-a56c-e962a5601092-clustermesh-secrets\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.452955 kubelet[1882]: I1213 04:11:12.452578 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-config-path\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.452955 kubelet[1882]: I1213 04:11:12.452599 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-run\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.452955 kubelet[1882]: I1213 04:11:12.452627 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3912dd89-8293-4ed1-a56c-e962a5601092-hubble-tls\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.452955 kubelet[1882]: I1213 04:11:12.452645 1882 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-etc-cni-netd\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.452955 kubelet[1882]: I1213 04:11:12.452668 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrblz\" (UniqueName: \"kubernetes.io/projected/3912dd89-8293-4ed1-a56c-e962a5601092-kube-api-access-zrblz\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.452955 kubelet[1882]: I1213 04:11:12.452689 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-bpf-maps\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.453154 kubelet[1882]: I1213 04:11:12.452713 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-xtables-lock\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.453154 kubelet[1882]: I1213 04:11:12.452732 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-host-proc-sys-net\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.453154 kubelet[1882]: I1213 04:11:12.452751 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cni-path\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.453154 kubelet[1882]: I1213 04:11:12.452772 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-hostproc\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.453154 kubelet[1882]: I1213 04:11:12.452797 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-cgroup\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.453154 kubelet[1882]: I1213 04:11:12.452819 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-host-proc-sys-kernel\") pod \"cilium-wlvwl\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " pod="kube-system/cilium-wlvwl" Dec 13 04:11:12.707571 env[1138]: time="2024-12-13T04:11:12.707118613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlvwl,Uid:3912dd89-8293-4ed1-a56c-e962a5601092,Namespace:kube-system,Attempt:0,}" Dec 13 04:11:12.730788 env[1138]: time="2024-12-13T04:11:12.730703250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:11:12.730942 env[1138]: time="2024-12-13T04:11:12.730801787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:11:12.730942 env[1138]: time="2024-12-13T04:11:12.730833967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:11:12.731192 env[1138]: time="2024-12-13T04:11:12.731154844Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b pid=2182 runtime=io.containerd.runc.v2 Dec 13 04:11:12.749312 systemd[1]: Started cri-containerd-db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b.scope. Dec 13 04:11:12.788098 env[1138]: time="2024-12-13T04:11:12.788035666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wlvwl,Uid:3912dd89-8293-4ed1-a56c-e962a5601092,Namespace:kube-system,Attempt:0,} returns sandbox id \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\"" Dec 13 04:11:13.228485 kubelet[1882]: I1213 04:11:13.228426 1882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fdd07bf-7e56-4aec-89fd-b5aaa373b126" path="/var/lib/kubelet/pods/4fdd07bf-7e56-4aec-89fd-b5aaa373b126/volumes" Dec 13 04:11:13.954899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215504848.mount: Deactivated successfully. 
Dec 13 04:11:15.210805 env[1138]: time="2024-12-13T04:11:15.210765365Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:11:15.214797 env[1138]: time="2024-12-13T04:11:15.214771229Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:11:15.219505 env[1138]: time="2024-12-13T04:11:15.219394119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:11:15.220998 env[1138]: time="2024-12-13T04:11:15.220926726Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 04:11:15.228504 env[1138]: time="2024-12-13T04:11:15.228462403Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 04:11:15.230359 env[1138]: time="2024-12-13T04:11:15.230329995Z" level=info msg="CreateContainer within sandbox \"2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 04:11:15.247962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106232821.mount: Deactivated successfully. Dec 13 04:11:15.254060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185754070.mount: Deactivated successfully. 
Dec 13 04:11:15.263604 env[1138]: time="2024-12-13T04:11:15.263544271Z" level=info msg="CreateContainer within sandbox \"2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\"" Dec 13 04:11:15.266235 env[1138]: time="2024-12-13T04:11:15.266189092Z" level=info msg="StartContainer for \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\"" Dec 13 04:11:15.302871 systemd[1]: Started cri-containerd-23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947.scope. Dec 13 04:11:15.436395 env[1138]: time="2024-12-13T04:11:15.436283239Z" level=info msg="StartContainer for \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\" returns successfully" Dec 13 04:11:16.994291 kubelet[1882]: I1213 04:11:16.994120 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2p8qj" podStartSLOduration=2.822442223 podStartE2EDuration="5.994019079s" podCreationTimestamp="2024-12-13 04:11:11 +0000 UTC" firstStartedPulling="2024-12-13 04:11:12.050980974 +0000 UTC m=+4.981459805" lastFinishedPulling="2024-12-13 04:11:15.2225578 +0000 UTC m=+8.153036661" observedRunningTime="2024-12-13 04:11:16.331863131 +0000 UTC m=+9.262341962" watchObservedRunningTime="2024-12-13 04:11:16.994019079 +0000 UTC m=+9.924497950" Dec 13 04:11:22.357627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455790717.mount: Deactivated successfully. 
Dec 13 04:11:27.160016 env[1138]: time="2024-12-13T04:11:27.159092474Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:11:27.166422 env[1138]: time="2024-12-13T04:11:27.166362211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:11:27.175304 env[1138]: time="2024-12-13T04:11:27.175248135Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:11:27.176735 env[1138]: time="2024-12-13T04:11:27.176684395Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 04:11:27.182954 env[1138]: time="2024-12-13T04:11:27.182880695Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:11:27.218807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720122362.mount: Deactivated successfully. Dec 13 04:11:27.236494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488251681.mount: Deactivated successfully. 
Dec 13 04:11:27.258409 env[1138]: time="2024-12-13T04:11:27.258300517Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\"" Dec 13 04:11:27.262882 env[1138]: time="2024-12-13T04:11:27.261909085Z" level=info msg="StartContainer for \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\"" Dec 13 04:11:27.289269 systemd[1]: Started cri-containerd-fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47.scope. Dec 13 04:11:27.599311 env[1138]: time="2024-12-13T04:11:27.599140187Z" level=info msg="StartContainer for \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\" returns successfully" Dec 13 04:11:27.623151 systemd[1]: cri-containerd-fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47.scope: Deactivated successfully. Dec 13 04:11:27.961857 env[1138]: time="2024-12-13T04:11:27.961752937Z" level=info msg="shim disconnected" id=fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47 Dec 13 04:11:27.962409 env[1138]: time="2024-12-13T04:11:27.962364371Z" level=warning msg="cleaning up after shim disconnected" id=fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47 namespace=k8s.io Dec 13 04:11:27.962605 env[1138]: time="2024-12-13T04:11:27.962569197Z" level=info msg="cleaning up dead shim" Dec 13 04:11:27.980700 env[1138]: time="2024-12-13T04:11:27.980646743Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:11:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2335 runtime=io.containerd.runc.v2\n" Dec 13 04:11:28.210269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47-rootfs.mount: Deactivated successfully. 
Dec 13 04:11:28.376135 env[1138]: time="2024-12-13T04:11:28.375772785Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:11:28.424055 env[1138]: time="2024-12-13T04:11:28.423585511Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\"" Dec 13 04:11:28.426809 env[1138]: time="2024-12-13T04:11:28.426745903Z" level=info msg="StartContainer for \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\"" Dec 13 04:11:28.470433 systemd[1]: Started cri-containerd-9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89.scope. Dec 13 04:11:28.517953 env[1138]: time="2024-12-13T04:11:28.516326974Z" level=info msg="StartContainer for \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\" returns successfully" Dec 13 04:11:28.529473 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 04:11:28.529925 systemd[1]: Stopped systemd-sysctl.service. Dec 13 04:11:28.530123 systemd[1]: Stopping systemd-sysctl.service... Dec 13 04:11:28.533451 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:11:28.537562 systemd[1]: cri-containerd-9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89.scope: Deactivated successfully. Dec 13 04:11:28.587636 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 04:11:28.594548 env[1138]: time="2024-12-13T04:11:28.594500906Z" level=info msg="shim disconnected" id=9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89 Dec 13 04:11:28.594825 env[1138]: time="2024-12-13T04:11:28.594787626Z" level=warning msg="cleaning up after shim disconnected" id=9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89 namespace=k8s.io Dec 13 04:11:28.594915 env[1138]: time="2024-12-13T04:11:28.594899958Z" level=info msg="cleaning up dead shim" Dec 13 04:11:28.604382 env[1138]: time="2024-12-13T04:11:28.604345787Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:11:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2401 runtime=io.containerd.runc.v2\n" Dec 13 04:11:29.209190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89-rootfs.mount: Deactivated successfully. Dec 13 04:11:29.387742 env[1138]: time="2024-12-13T04:11:29.386880045Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:11:29.435679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792714997.mount: Deactivated successfully. Dec 13 04:11:29.450686 env[1138]: time="2024-12-13T04:11:29.450645132Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\"" Dec 13 04:11:29.452231 env[1138]: time="2024-12-13T04:11:29.451553276Z" level=info msg="StartContainer for \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\"" Dec 13 04:11:29.487920 systemd[1]: Started cri-containerd-bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e.scope. 
Dec 13 04:11:29.537843 env[1138]: time="2024-12-13T04:11:29.537796682Z" level=info msg="StartContainer for \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\" returns successfully"
Dec 13 04:11:29.549574 systemd[1]: cri-containerd-bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e.scope: Deactivated successfully.
Dec 13 04:11:29.596858 env[1138]: time="2024-12-13T04:11:29.596787535Z" level=info msg="shim disconnected" id=bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e
Dec 13 04:11:29.597137 env[1138]: time="2024-12-13T04:11:29.597117546Z" level=warning msg="cleaning up after shim disconnected" id=bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e namespace=k8s.io
Dec 13 04:11:29.597268 env[1138]: time="2024-12-13T04:11:29.597251600Z" level=info msg="cleaning up dead shim"
Dec 13 04:11:29.606661 env[1138]: time="2024-12-13T04:11:29.606573383Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:11:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2462 runtime=io.containerd.runc.v2\n"
Dec 13 04:11:30.208996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e-rootfs.mount: Deactivated successfully.
Dec 13 04:11:30.402422 env[1138]: time="2024-12-13T04:11:30.402194071Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 04:11:30.442568 env[1138]: time="2024-12-13T04:11:30.442456700Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\""
Dec 13 04:11:30.443913 env[1138]: time="2024-12-13T04:11:30.443844256Z" level=info msg="StartContainer for \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\""
Dec 13 04:11:30.498028 systemd[1]: Started cri-containerd-060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e.scope.
Dec 13 04:11:30.534240 systemd[1]: cri-containerd-060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e.scope: Deactivated successfully.
Dec 13 04:11:30.536933 env[1138]: time="2024-12-13T04:11:30.536730647Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3912dd89_8293_4ed1_a56c_e962a5601092.slice/cri-containerd-060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e.scope/memory.events\": no such file or directory"
Dec 13 04:11:30.542455 env[1138]: time="2024-12-13T04:11:30.542390241Z" level=info msg="StartContainer for \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\" returns successfully"
Dec 13 04:11:30.570840 env[1138]: time="2024-12-13T04:11:30.570764467Z" level=info msg="shim disconnected" id=060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e
Dec 13 04:11:30.571199 env[1138]: time="2024-12-13T04:11:30.571140586Z" level=warning msg="cleaning up after shim disconnected" id=060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e namespace=k8s.io
Dec 13 04:11:30.571366 env[1138]: time="2024-12-13T04:11:30.571338760Z" level=info msg="cleaning up dead shim"
Dec 13 04:11:30.580776 env[1138]: time="2024-12-13T04:11:30.580743678Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:11:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2518 runtime=io.containerd.runc.v2\n"
Dec 13 04:11:31.209231 systemd[1]: run-containerd-runc-k8s.io-060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e-runc.ixO7ib.mount: Deactivated successfully.
Dec 13 04:11:31.209489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e-rootfs.mount: Deactivated successfully.
Dec 13 04:11:31.407308 env[1138]: time="2024-12-13T04:11:31.407137392Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 04:11:31.446864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354106901.mount: Deactivated successfully.
Dec 13 04:11:31.467242 env[1138]: time="2024-12-13T04:11:31.466787877Z" level=info msg="CreateContainer within sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\""
Dec 13 04:11:31.471309 env[1138]: time="2024-12-13T04:11:31.468535823Z" level=info msg="StartContainer for \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\""
Dec 13 04:11:31.516485 systemd[1]: Started cri-containerd-3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f.scope.
Dec 13 04:11:31.560423 env[1138]: time="2024-12-13T04:11:31.560365303Z" level=info msg="StartContainer for \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\" returns successfully"
Dec 13 04:11:31.819069 kubelet[1882]: I1213 04:11:31.818403 1882 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 04:11:31.897892 kubelet[1882]: I1213 04:11:31.897475 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d695da8-4ed8-4efd-99d3-a36c648cc50a-config-volume\") pod \"coredns-6f6b679f8f-vklqj\" (UID: \"1d695da8-4ed8-4efd-99d3-a36c648cc50a\") " pod="kube-system/coredns-6f6b679f8f-vklqj"
Dec 13 04:11:31.897892 kubelet[1882]: I1213 04:11:31.897644 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqnpr\" (UniqueName: \"kubernetes.io/projected/1d695da8-4ed8-4efd-99d3-a36c648cc50a-kube-api-access-mqnpr\") pod \"coredns-6f6b679f8f-vklqj\" (UID: \"1d695da8-4ed8-4efd-99d3-a36c648cc50a\") " pod="kube-system/coredns-6f6b679f8f-vklqj"
Dec 13 04:11:31.908730 systemd[1]: Created slice kubepods-burstable-pod1d695da8_4ed8_4efd_99d3_a36c648cc50a.slice.
Dec 13 04:11:31.923849 systemd[1]: Created slice kubepods-burstable-pod4044f810_2186_4621_a2e7_46392b8cf71a.slice.
Dec 13 04:11:31.929494 kubelet[1882]: W1213 04:11:31.929466 1882 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-6-e-a81afd2c25.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-e-a81afd2c25.novalocal' and this object
Dec 13 04:11:31.929696 kubelet[1882]: E1213 04:11:31.929669 1882 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-3510-3-6-e-a81afd2c25.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-e-a81afd2c25.novalocal' and this object" logger="UnhandledError"
Dec 13 04:11:32.099616 kubelet[1882]: I1213 04:11:32.099312 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4044f810-2186-4621-a2e7-46392b8cf71a-config-volume\") pod \"coredns-6f6b679f8f-rxd9z\" (UID: \"4044f810-2186-4621-a2e7-46392b8cf71a\") " pod="kube-system/coredns-6f6b679f8f-rxd9z"
Dec 13 04:11:32.099616 kubelet[1882]: I1213 04:11:32.099504 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkgn\" (UniqueName: \"kubernetes.io/projected/4044f810-2186-4621-a2e7-46392b8cf71a-kube-api-access-tlkgn\") pod \"coredns-6f6b679f8f-rxd9z\" (UID: \"4044f810-2186-4621-a2e7-46392b8cf71a\") " pod="kube-system/coredns-6f6b679f8f-rxd9z"
Dec 13 04:11:33.122068 env[1138]: time="2024-12-13T04:11:33.121509831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vklqj,Uid:1d695da8-4ed8-4efd-99d3-a36c648cc50a,Namespace:kube-system,Attempt:0,}"
Dec 13 04:11:33.131074 env[1138]: time="2024-12-13T04:11:33.131027517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rxd9z,Uid:4044f810-2186-4621-a2e7-46392b8cf71a,Namespace:kube-system,Attempt:0,}"
Dec 13 04:11:34.813671 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 04:11:34.814806 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 04:11:34.814171 systemd-networkd[979]: cilium_host: Link UP
Dec 13 04:11:34.814605 systemd-networkd[979]: cilium_net: Link UP
Dec 13 04:11:34.814973 systemd-networkd[979]: cilium_net: Gained carrier
Dec 13 04:11:34.815408 systemd-networkd[979]: cilium_host: Gained carrier
Dec 13 04:11:34.950858 systemd-networkd[979]: cilium_vxlan: Link UP
Dec 13 04:11:34.950867 systemd-networkd[979]: cilium_vxlan: Gained carrier
Dec 13 04:11:34.975483 systemd-networkd[979]: cilium_net: Gained IPv6LL
Dec 13 04:11:35.030653 systemd-networkd[979]: cilium_host: Gained IPv6LL
Dec 13 04:11:35.844238 kernel: NET: Registered PF_ALG protocol family
Dec 13 04:11:36.714582 systemd-networkd[979]: lxc_health: Link UP
Dec 13 04:11:36.720367 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 04:11:36.720360 systemd-networkd[979]: lxc_health: Gained carrier
Dec 13 04:11:36.751650 kubelet[1882]: I1213 04:11:36.751548 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wlvwl" podStartSLOduration=10.36246489 podStartE2EDuration="24.751521978s" podCreationTimestamp="2024-12-13 04:11:12 +0000 UTC" firstStartedPulling="2024-12-13 04:11:12.790109039 +0000 UTC m=+5.720587870" lastFinishedPulling="2024-12-13 04:11:27.179166127 +0000 UTC m=+20.109644958" observedRunningTime="2024-12-13 04:11:32.693244708 +0000 UTC m=+25.623723589" watchObservedRunningTime="2024-12-13 04:11:36.751521978 +0000 UTC m=+29.682000799"
Dec 13 04:11:36.918257 systemd-networkd[979]: cilium_vxlan: Gained IPv6LL
Dec 13 04:11:37.231964 systemd-networkd[979]: lxce898cd873719: Link UP
Dec 13 04:11:37.238226 kernel: eth0: renamed from tmp2707f
Dec 13 04:11:37.258062 systemd-networkd[979]: lxce898cd873719: Gained carrier
Dec 13 04:11:37.259123 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce898cd873719: link becomes ready
Dec 13 04:11:37.258343 systemd-networkd[979]: lxcbb55ead2297c: Link UP
Dec 13 04:11:37.260278 kernel: eth0: renamed from tmp7b65c
Dec 13 04:11:37.265618 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbb55ead2297c: link becomes ready
Dec 13 04:11:37.266597 systemd-networkd[979]: lxcbb55ead2297c: Gained carrier
Dec 13 04:11:37.943301 systemd-networkd[979]: lxc_health: Gained IPv6LL
Dec 13 04:11:38.630614 systemd-networkd[979]: lxce898cd873719: Gained IPv6LL
Dec 13 04:11:39.014415 systemd-networkd[979]: lxcbb55ead2297c: Gained IPv6LL
Dec 13 04:11:41.486764 env[1138]: time="2024-12-13T04:11:41.486602264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:11:41.486764 env[1138]: time="2024-12-13T04:11:41.486682655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:11:41.486764 env[1138]: time="2024-12-13T04:11:41.486702833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:11:41.487613 env[1138]: time="2024-12-13T04:11:41.487553707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b65c4a6e0ea7dff2c345ee62238b895bbc6b53061c4ef032383f48600fed6d2 pid=3059 runtime=io.containerd.runc.v2
Dec 13 04:11:41.517149 systemd[1]: run-containerd-runc-k8s.io-7b65c4a6e0ea7dff2c345ee62238b895bbc6b53061c4ef032383f48600fed6d2-runc.BBgCGg.mount: Deactivated successfully.
Dec 13 04:11:41.530284 systemd[1]: Started cri-containerd-7b65c4a6e0ea7dff2c345ee62238b895bbc6b53061c4ef032383f48600fed6d2.scope.
Dec 13 04:11:41.581801 env[1138]: time="2024-12-13T04:11:41.581745795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rxd9z,Uid:4044f810-2186-4621-a2e7-46392b8cf71a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b65c4a6e0ea7dff2c345ee62238b895bbc6b53061c4ef032383f48600fed6d2\""
Dec 13 04:11:41.588298 env[1138]: time="2024-12-13T04:11:41.588260694Z" level=info msg="CreateContainer within sandbox \"7b65c4a6e0ea7dff2c345ee62238b895bbc6b53061c4ef032383f48600fed6d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 04:11:41.614015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147437300.mount: Deactivated successfully.
Dec 13 04:11:41.636809 env[1138]: time="2024-12-13T04:11:41.636688623Z" level=info msg="CreateContainer within sandbox \"7b65c4a6e0ea7dff2c345ee62238b895bbc6b53061c4ef032383f48600fed6d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57b2c92d112a66cb473bac4af68048790699038e2f295926e0f8c83d8fbc7ea4\""
Dec 13 04:11:41.638405 env[1138]: time="2024-12-13T04:11:41.638375492Z" level=info msg="StartContainer for \"57b2c92d112a66cb473bac4af68048790699038e2f295926e0f8c83d8fbc7ea4\""
Dec 13 04:11:41.658142 systemd[1]: Started cri-containerd-57b2c92d112a66cb473bac4af68048790699038e2f295926e0f8c83d8fbc7ea4.scope.
Dec 13 04:11:41.729918 env[1138]: time="2024-12-13T04:11:41.729848797Z" level=info msg="StartContainer for \"57b2c92d112a66cb473bac4af68048790699038e2f295926e0f8c83d8fbc7ea4\" returns successfully"
Dec 13 04:11:41.758498 env[1138]: time="2024-12-13T04:11:41.758342587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:11:41.758498 env[1138]: time="2024-12-13T04:11:41.758387411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:11:41.759513 env[1138]: time="2024-12-13T04:11:41.758410344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:11:41.759513 env[1138]: time="2024-12-13T04:11:41.758916389Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2707ff9d557d9cec2caad16543fe39521b9a676c36248265c620b0e8a550a177 pid=3134 runtime=io.containerd.runc.v2
Dec 13 04:11:41.776604 systemd[1]: Started cri-containerd-2707ff9d557d9cec2caad16543fe39521b9a676c36248265c620b0e8a550a177.scope.
Dec 13 04:11:41.832020 env[1138]: time="2024-12-13T04:11:41.831953337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vklqj,Uid:1d695da8-4ed8-4efd-99d3-a36c648cc50a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2707ff9d557d9cec2caad16543fe39521b9a676c36248265c620b0e8a550a177\""
Dec 13 04:11:41.835405 env[1138]: time="2024-12-13T04:11:41.834324226Z" level=info msg="CreateContainer within sandbox \"2707ff9d557d9cec2caad16543fe39521b9a676c36248265c620b0e8a550a177\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 04:11:41.882884 env[1138]: time="2024-12-13T04:11:41.882803712Z" level=info msg="CreateContainer within sandbox \"2707ff9d557d9cec2caad16543fe39521b9a676c36248265c620b0e8a550a177\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfadda9264bf8010f412870366667d255fa0023f51db42a31eb2f88e68d37f20\""
Dec 13 04:11:41.886156 env[1138]: time="2024-12-13T04:11:41.885105159Z" level=info msg="StartContainer for \"cfadda9264bf8010f412870366667d255fa0023f51db42a31eb2f88e68d37f20\""
Dec 13 04:11:41.918635 systemd[1]: Started cri-containerd-cfadda9264bf8010f412870366667d255fa0023f51db42a31eb2f88e68d37f20.scope.
Dec 13 04:11:41.981937 env[1138]: time="2024-12-13T04:11:41.981885175Z" level=info msg="StartContainer for \"cfadda9264bf8010f412870366667d255fa0023f51db42a31eb2f88e68d37f20\" returns successfully"
Dec 13 04:11:42.466467 kubelet[1882]: I1213 04:11:42.466314 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rxd9z" podStartSLOduration=31.466284705 podStartE2EDuration="31.466284705s" podCreationTimestamp="2024-12-13 04:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:11:42.464296438 +0000 UTC m=+35.394775329" watchObservedRunningTime="2024-12-13 04:11:42.466284705 +0000 UTC m=+35.396763576"
Dec 13 04:11:42.542718 kubelet[1882]: I1213 04:11:42.542660 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vklqj" podStartSLOduration=31.542644569 podStartE2EDuration="31.542644569s" podCreationTimestamp="2024-12-13 04:11:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:11:42.541725638 +0000 UTC m=+35.472204469" watchObservedRunningTime="2024-12-13 04:11:42.542644569 +0000 UTC m=+35.473123390"
Dec 13 04:12:15.205032 systemd[1]: Started sshd@5-172.24.4.188:22-172.24.4.1:50028.service.
Dec 13 04:12:16.650439 sshd[3227]: Accepted publickey for core from 172.24.4.1 port 50028 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:16.655063 sshd[3227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:16.667578 systemd-logind[1133]: New session 6 of user core.
Dec 13 04:12:16.668442 systemd[1]: Started session-6.scope.
Dec 13 04:12:17.616690 sshd[3227]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:17.622378 systemd[1]: sshd@5-172.24.4.188:22-172.24.4.1:50028.service: Deactivated successfully.
Dec 13 04:12:17.624106 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 04:12:17.625697 systemd-logind[1133]: Session 6 logged out. Waiting for processes to exit.
Dec 13 04:12:17.628138 systemd-logind[1133]: Removed session 6.
Dec 13 04:12:22.628606 systemd[1]: Started sshd@6-172.24.4.188:22-172.24.4.1:50044.service.
Dec 13 04:12:23.647642 sshd[3240]: Accepted publickey for core from 172.24.4.1 port 50044 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:23.651640 sshd[3240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:23.662271 systemd[1]: Started session-7.scope.
Dec 13 04:12:23.663005 systemd-logind[1133]: New session 7 of user core.
Dec 13 04:12:24.743549 sshd[3240]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:24.750283 systemd[1]: sshd@6-172.24.4.188:22-172.24.4.1:50044.service: Deactivated successfully.
Dec 13 04:12:24.752067 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 04:12:24.753510 systemd-logind[1133]: Session 7 logged out. Waiting for processes to exit.
Dec 13 04:12:24.755860 systemd-logind[1133]: Removed session 7.
Dec 13 04:12:29.758416 systemd[1]: Started sshd@7-172.24.4.188:22-172.24.4.1:48214.service.
Dec 13 04:12:31.133036 sshd[3254]: Accepted publickey for core from 172.24.4.1 port 48214 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:31.136296 sshd[3254]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:31.147405 systemd-logind[1133]: New session 8 of user core.
Dec 13 04:12:31.148324 systemd[1]: Started session-8.scope.
Dec 13 04:12:31.916546 sshd[3254]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:31.923007 systemd[1]: sshd@7-172.24.4.188:22-172.24.4.1:48214.service: Deactivated successfully.
Dec 13 04:12:31.924752 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 04:12:31.926184 systemd-logind[1133]: Session 8 logged out. Waiting for processes to exit.
Dec 13 04:12:31.928490 systemd-logind[1133]: Removed session 8.
Dec 13 04:12:36.926613 systemd[1]: Started sshd@8-172.24.4.188:22-172.24.4.1:39430.service.
Dec 13 04:12:38.240524 sshd[3267]: Accepted publickey for core from 172.24.4.1 port 39430 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:38.243316 sshd[3267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:38.257868 systemd-logind[1133]: New session 9 of user core.
Dec 13 04:12:38.258745 systemd[1]: Started session-9.scope.
Dec 13 04:12:39.033091 sshd[3267]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:39.042462 systemd[1]: Started sshd@9-172.24.4.188:22-172.24.4.1:39438.service.
Dec 13 04:12:39.044607 systemd[1]: sshd@8-172.24.4.188:22-172.24.4.1:39430.service: Deactivated successfully.
Dec 13 04:12:39.048171 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 04:12:39.051115 systemd-logind[1133]: Session 9 logged out. Waiting for processes to exit.
Dec 13 04:12:39.054502 systemd-logind[1133]: Removed session 9.
Dec 13 04:12:40.426814 sshd[3280]: Accepted publickey for core from 172.24.4.1 port 39438 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:40.430201 sshd[3280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:40.444773 systemd-logind[1133]: New session 10 of user core.
Dec 13 04:12:40.446502 systemd[1]: Started session-10.scope.
Dec 13 04:12:41.380720 sshd[3280]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:41.397737 systemd[1]: Started sshd@10-172.24.4.188:22-172.24.4.1:39448.service.
Dec 13 04:12:41.402295 systemd[1]: sshd@9-172.24.4.188:22-172.24.4.1:39438.service: Deactivated successfully.
Dec 13 04:12:41.405100 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 04:12:41.410799 systemd-logind[1133]: Session 10 logged out. Waiting for processes to exit.
Dec 13 04:12:41.414407 systemd-logind[1133]: Removed session 10.
Dec 13 04:12:42.627831 sshd[3290]: Accepted publickey for core from 172.24.4.1 port 39448 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:42.631313 sshd[3290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:42.648095 systemd-logind[1133]: New session 11 of user core.
Dec 13 04:12:42.649479 systemd[1]: Started session-11.scope.
Dec 13 04:12:43.238345 sshd[3290]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:43.244110 systemd-logind[1133]: Session 11 logged out. Waiting for processes to exit.
Dec 13 04:12:43.244931 systemd[1]: sshd@10-172.24.4.188:22-172.24.4.1:39448.service: Deactivated successfully.
Dec 13 04:12:43.246701 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 04:12:43.248528 systemd-logind[1133]: Removed session 11.
Dec 13 04:12:48.249927 systemd[1]: Started sshd@11-172.24.4.188:22-172.24.4.1:49560.service.
Dec 13 04:12:49.593404 sshd[3305]: Accepted publickey for core from 172.24.4.1 port 49560 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:49.596111 sshd[3305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:49.610950 systemd-logind[1133]: New session 12 of user core.
Dec 13 04:12:49.612091 systemd[1]: Started session-12.scope.
Dec 13 04:12:50.332039 sshd[3305]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:50.339534 systemd[1]: sshd@11-172.24.4.188:22-172.24.4.1:49560.service: Deactivated successfully.
Dec 13 04:12:50.341784 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 04:12:50.343543 systemd-logind[1133]: Session 12 logged out. Waiting for processes to exit.
Dec 13 04:12:50.349586 systemd[1]: Started sshd@12-172.24.4.188:22-172.24.4.1:49574.service.
Dec 13 04:12:50.353590 systemd-logind[1133]: Removed session 12.
Dec 13 04:12:51.508366 sshd[3317]: Accepted publickey for core from 172.24.4.1 port 49574 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:51.511007 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:51.525637 systemd-logind[1133]: New session 13 of user core.
Dec 13 04:12:51.526626 systemd[1]: Started session-13.scope.
Dec 13 04:12:52.839056 sshd[3317]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:52.848283 systemd[1]: Started sshd@13-172.24.4.188:22-172.24.4.1:49588.service.
Dec 13 04:12:52.860132 systemd[1]: sshd@12-172.24.4.188:22-172.24.4.1:49574.service: Deactivated successfully.
Dec 13 04:12:52.862089 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 04:12:52.863435 systemd-logind[1133]: Session 13 logged out. Waiting for processes to exit.
Dec 13 04:12:52.866525 systemd-logind[1133]: Removed session 13.
Dec 13 04:12:54.188383 sshd[3326]: Accepted publickey for core from 172.24.4.1 port 49588 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:54.191417 sshd[3326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:54.209328 systemd-logind[1133]: New session 14 of user core.
Dec 13 04:12:54.211493 systemd[1]: Started session-14.scope.
Dec 13 04:12:57.056906 sshd[3326]: pam_unix(sshd:session): session closed for user core
Dec 13 04:12:57.066039 systemd[1]: sshd@13-172.24.4.188:22-172.24.4.1:49588.service: Deactivated successfully.
Dec 13 04:12:57.068632 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 04:12:57.075450 systemd[1]: Started sshd@14-172.24.4.188:22-172.24.4.1:45922.service.
Dec 13 04:12:57.077746 systemd-logind[1133]: Session 14 logged out. Waiting for processes to exit.
Dec 13 04:12:57.081568 systemd-logind[1133]: Removed session 14.
Dec 13 04:12:58.488177 sshd[3344]: Accepted publickey for core from 172.24.4.1 port 45922 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:12:58.490965 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:12:58.503060 systemd[1]: Started session-15.scope.
Dec 13 04:12:58.504656 systemd-logind[1133]: New session 15 of user core.
Dec 13 04:13:00.097707 sshd[3344]: pam_unix(sshd:session): session closed for user core
Dec 13 04:13:00.105435 systemd[1]: Started sshd@15-172.24.4.188:22-172.24.4.1:45938.service.
Dec 13 04:13:00.115335 systemd[1]: sshd@14-172.24.4.188:22-172.24.4.1:45922.service: Deactivated successfully.
Dec 13 04:13:00.116773 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 04:13:00.122006 systemd-logind[1133]: Session 15 logged out. Waiting for processes to exit.
Dec 13 04:13:00.124319 systemd-logind[1133]: Removed session 15.
Dec 13 04:13:01.503942 sshd[3353]: Accepted publickey for core from 172.24.4.1 port 45938 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:13:01.507389 sshd[3353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:13:01.517393 systemd-logind[1133]: New session 16 of user core.
Dec 13 04:13:01.518114 systemd[1]: Started session-16.scope.
Dec 13 04:13:02.283749 sshd[3353]: pam_unix(sshd:session): session closed for user core
Dec 13 04:13:02.289020 systemd[1]: sshd@15-172.24.4.188:22-172.24.4.1:45938.service: Deactivated successfully.
Dec 13 04:13:02.290572 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 04:13:02.292113 systemd-logind[1133]: Session 16 logged out. Waiting for processes to exit.
Dec 13 04:13:02.294841 systemd-logind[1133]: Removed session 16.
Dec 13 04:13:07.294612 systemd[1]: Started sshd@16-172.24.4.188:22-172.24.4.1:55618.service.
Dec 13 04:13:08.497519 sshd[3371]: Accepted publickey for core from 172.24.4.1 port 55618 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:13:08.501167 sshd[3371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:13:08.513165 systemd-logind[1133]: New session 17 of user core.
Dec 13 04:13:08.514257 systemd[1]: Started session-17.scope.
Dec 13 04:13:09.276657 sshd[3371]: pam_unix(sshd:session): session closed for user core
Dec 13 04:13:09.282331 systemd[1]: sshd@16-172.24.4.188:22-172.24.4.1:55618.service: Deactivated successfully.
Dec 13 04:13:09.284042 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 04:13:09.285494 systemd-logind[1133]: Session 17 logged out. Waiting for processes to exit.
Dec 13 04:13:09.287382 systemd-logind[1133]: Removed session 17.
Dec 13 04:13:14.288085 systemd[1]: Started sshd@17-172.24.4.188:22-172.24.4.1:55628.service.
Dec 13 04:13:15.496182 sshd[3386]: Accepted publickey for core from 172.24.4.1 port 55628 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:13:15.499104 sshd[3386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:13:15.511404 systemd-logind[1133]: New session 18 of user core.
Dec 13 04:13:15.512392 systemd[1]: Started session-18.scope.
Dec 13 04:13:16.407064 sshd[3386]: pam_unix(sshd:session): session closed for user core
Dec 13 04:13:16.412987 systemd[1]: sshd@17-172.24.4.188:22-172.24.4.1:55628.service: Deactivated successfully.
Dec 13 04:13:16.414368 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 04:13:16.416614 systemd-logind[1133]: Session 18 logged out. Waiting for processes to exit.
Dec 13 04:13:16.419288 systemd-logind[1133]: Removed session 18.
Dec 13 04:13:21.418622 systemd[1]: Started sshd@18-172.24.4.188:22-172.24.4.1:42962.service.
Dec 13 04:13:22.621546 sshd[3397]: Accepted publickey for core from 172.24.4.1 port 42962 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:13:22.625350 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:13:22.637990 systemd-logind[1133]: New session 19 of user core.
Dec 13 04:13:22.639598 systemd[1]: Started session-19.scope.
Dec 13 04:13:23.407367 sshd[3397]: pam_unix(sshd:session): session closed for user core
Dec 13 04:13:23.415443 systemd[1]: Started sshd@19-172.24.4.188:22-172.24.4.1:42966.service.
Dec 13 04:13:23.416729 systemd[1]: sshd@18-172.24.4.188:22-172.24.4.1:42962.service: Deactivated successfully.
Dec 13 04:13:23.420383 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 04:13:23.423465 systemd-logind[1133]: Session 19 logged out. Waiting for processes to exit.
Dec 13 04:13:23.427593 systemd-logind[1133]: Removed session 19.
Dec 13 04:13:24.662731 sshd[3408]: Accepted publickey for core from 172.24.4.1 port 42966 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:13:24.664277 sshd[3408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:13:24.675411 systemd-logind[1133]: New session 20 of user core.
Dec 13 04:13:24.678130 systemd[1]: Started session-20.scope.
Dec 13 04:13:27.293629 env[1138]: time="2024-12-13T04:13:27.293565795Z" level=info msg="StopContainer for \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\" with timeout 30 (s)"
Dec 13 04:13:27.294312 env[1138]: time="2024-12-13T04:13:27.294190932Z" level=info msg="Stop container \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\" with signal terminated"
Dec 13 04:13:27.329157 systemd[1]: cri-containerd-23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947.scope: Deactivated successfully.
Dec 13 04:13:27.334715 env[1138]: time="2024-12-13T04:13:27.334593402Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 04:13:27.352168 env[1138]: time="2024-12-13T04:13:27.352102091Z" level=info msg="StopContainer for \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\" with timeout 2 (s)"
Dec 13 04:13:27.352620 env[1138]: time="2024-12-13T04:13:27.352579941Z" level=info msg="Stop container \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\" with signal terminated"
Dec 13 04:13:27.357745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947-rootfs.mount: Deactivated successfully.
Dec 13 04:13:27.369331 systemd-networkd[979]: lxc_health: Link DOWN Dec 13 04:13:27.369342 systemd-networkd[979]: lxc_health: Lost carrier Dec 13 04:13:27.370924 env[1138]: time="2024-12-13T04:13:27.370880589Z" level=info msg="shim disconnected" id=23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947 Dec 13 04:13:27.371071 env[1138]: time="2024-12-13T04:13:27.371052002Z" level=warning msg="cleaning up after shim disconnected" id=23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947 namespace=k8s.io Dec 13 04:13:27.371150 env[1138]: time="2024-12-13T04:13:27.371125541Z" level=info msg="cleaning up dead shim" Dec 13 04:13:27.397607 kubelet[1882]: E1213 04:13:27.394269 1882 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:13:27.400967 env[1138]: time="2024-12-13T04:13:27.400467477Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3461 runtime=io.containerd.runc.v2\n" Dec 13 04:13:27.404479 env[1138]: time="2024-12-13T04:13:27.404442878Z" level=info msg="StopContainer for \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\" returns successfully" Dec 13 04:13:27.405651 env[1138]: time="2024-12-13T04:13:27.405630774Z" level=info msg="StopPodSandbox for \"2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8\"" Dec 13 04:13:27.405811 env[1138]: time="2024-12-13T04:13:27.405779424Z" level=info msg="Container to stop \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:13:27.408338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8-shm.mount: Deactivated successfully. 
Dec 13 04:13:27.412694 systemd[1]: cri-containerd-3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f.scope: Deactivated successfully. Dec 13 04:13:27.412933 systemd[1]: cri-containerd-3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f.scope: Consumed 9.084s CPU time. Dec 13 04:13:27.426441 systemd[1]: cri-containerd-2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8.scope: Deactivated successfully. Dec 13 04:13:27.452105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f-rootfs.mount: Deactivated successfully. Dec 13 04:13:27.457032 env[1138]: time="2024-12-13T04:13:27.456976288Z" level=info msg="shim disconnected" id=3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f Dec 13 04:13:27.457293 env[1138]: time="2024-12-13T04:13:27.457273707Z" level=warning msg="cleaning up after shim disconnected" id=3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f namespace=k8s.io Dec 13 04:13:27.457884 env[1138]: time="2024-12-13T04:13:27.457384807Z" level=info msg="cleaning up dead shim" Dec 13 04:13:27.476934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8-rootfs.mount: Deactivated successfully. 
Dec 13 04:13:27.478453 env[1138]: time="2024-12-13T04:13:27.478404843Z" level=info msg="shim disconnected" id=2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8 Dec 13 04:13:27.479952 env[1138]: time="2024-12-13T04:13:27.479927659Z" level=warning msg="cleaning up after shim disconnected" id=2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8 namespace=k8s.io Dec 13 04:13:27.480078 env[1138]: time="2024-12-13T04:13:27.480061871Z" level=info msg="cleaning up dead shim" Dec 13 04:13:27.481958 env[1138]: time="2024-12-13T04:13:27.481935387Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3503 runtime=io.containerd.runc.v2\n" Dec 13 04:13:27.485544 env[1138]: time="2024-12-13T04:13:27.485503522Z" level=info msg="StopContainer for \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\" returns successfully" Dec 13 04:13:27.486433 env[1138]: time="2024-12-13T04:13:27.486411420Z" level=info msg="StopPodSandbox for \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\"" Dec 13 04:13:27.486605 env[1138]: time="2024-12-13T04:13:27.486574627Z" level=info msg="Container to stop \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:13:27.486799 env[1138]: time="2024-12-13T04:13:27.486693902Z" level=info msg="Container to stop \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:13:27.486900 env[1138]: time="2024-12-13T04:13:27.486881615Z" level=info msg="Container to stop \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:13:27.487028 env[1138]: time="2024-12-13T04:13:27.487010277Z" level=info msg="Container to stop 
\"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:13:27.487118 env[1138]: time="2024-12-13T04:13:27.487100627Z" level=info msg="Container to stop \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:13:27.492266 env[1138]: time="2024-12-13T04:13:27.492212717Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3521 runtime=io.containerd.runc.v2\n" Dec 13 04:13:27.492847 env[1138]: time="2024-12-13T04:13:27.492821403Z" level=info msg="TearDown network for sandbox \"2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8\" successfully" Dec 13 04:13:27.492946 env[1138]: time="2024-12-13T04:13:27.492928084Z" level=info msg="StopPodSandbox for \"2f345f095c3fc50f5dc041be949fbb7fc55fdd1977e2aa803eb6e8270e3cddb8\" returns successfully" Dec 13 04:13:27.497010 systemd[1]: cri-containerd-db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b.scope: Deactivated successfully. 
Dec 13 04:13:27.538984 env[1138]: time="2024-12-13T04:13:27.538909804Z" level=info msg="shim disconnected" id=db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b Dec 13 04:13:27.538984 env[1138]: time="2024-12-13T04:13:27.538981498Z" level=warning msg="cleaning up after shim disconnected" id=db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b namespace=k8s.io Dec 13 04:13:27.539270 env[1138]: time="2024-12-13T04:13:27.538996317Z" level=info msg="cleaning up dead shim" Dec 13 04:13:27.551089 env[1138]: time="2024-12-13T04:13:27.550931787Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3554 runtime=io.containerd.runc.v2\n" Dec 13 04:13:27.557815 env[1138]: time="2024-12-13T04:13:27.555581828Z" level=info msg="TearDown network for sandbox \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" successfully" Dec 13 04:13:27.557815 env[1138]: time="2024-12-13T04:13:27.555619639Z" level=info msg="StopPodSandbox for \"db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b\" returns successfully" Dec 13 04:13:27.669386 kubelet[1882]: I1213 04:13:27.669326 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrblz\" (UniqueName: \"kubernetes.io/projected/3912dd89-8293-4ed1-a56c-e962a5601092-kube-api-access-zrblz\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669386 kubelet[1882]: I1213 04:13:27.669387 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-run\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669636 kubelet[1882]: I1213 04:13:27.669417 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-host-proc-sys-net\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669636 kubelet[1882]: I1213 04:13:27.669436 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cni-path\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669636 kubelet[1882]: I1213 04:13:27.669460 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crsjc\" (UniqueName: \"kubernetes.io/projected/82ed48ee-ea06-445c-9d0e-5263df564047-kube-api-access-crsjc\") pod \"82ed48ee-ea06-445c-9d0e-5263df564047\" (UID: \"82ed48ee-ea06-445c-9d0e-5263df564047\") " Dec 13 04:13:27.669636 kubelet[1882]: I1213 04:13:27.669479 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-host-proc-sys-kernel\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669636 kubelet[1882]: I1213 04:13:27.669497 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-xtables-lock\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669636 kubelet[1882]: I1213 04:13:27.669531 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3912dd89-8293-4ed1-a56c-e962a5601092-hubble-tls\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669817 
kubelet[1882]: I1213 04:13:27.669553 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-bpf-maps\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669817 kubelet[1882]: I1213 04:13:27.669569 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-hostproc\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669817 kubelet[1882]: I1213 04:13:27.669594 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82ed48ee-ea06-445c-9d0e-5263df564047-cilium-config-path\") pod \"82ed48ee-ea06-445c-9d0e-5263df564047\" (UID: \"82ed48ee-ea06-445c-9d0e-5263df564047\") " Dec 13 04:13:27.669817 kubelet[1882]: I1213 04:13:27.669612 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-etc-cni-netd\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669817 kubelet[1882]: I1213 04:13:27.669629 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-cgroup\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669817 kubelet[1882]: I1213 04:13:27.669648 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-lib-modules\") pod 
\"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669979 kubelet[1882]: I1213 04:13:27.669673 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3912dd89-8293-4ed1-a56c-e962a5601092-clustermesh-secrets\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.669979 kubelet[1882]: I1213 04:13:27.669693 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-config-path\") pod \"3912dd89-8293-4ed1-a56c-e962a5601092\" (UID: \"3912dd89-8293-4ed1-a56c-e962a5601092\") " Dec 13 04:13:27.681838 kubelet[1882]: I1213 04:13:27.678737 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.682143 kubelet[1882]: I1213 04:13:27.682125 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.682290 kubelet[1882]: I1213 04:13:27.682274 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cni-path" (OuterVolumeSpecName: "cni-path") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.684417 kubelet[1882]: I1213 04:13:27.684359 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.684417 kubelet[1882]: I1213 04:13:27.684421 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.684562 kubelet[1882]: I1213 04:13:27.678738 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.684562 kubelet[1882]: I1213 04:13:27.681843 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-hostproc" (OuterVolumeSpecName: "hostproc") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.684562 kubelet[1882]: I1213 04:13:27.684451 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:13:27.684562 kubelet[1882]: I1213 04:13:27.684530 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3912dd89-8293-4ed1-a56c-e962a5601092-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:13:27.684698 kubelet[1882]: I1213 04:13:27.684593 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3912dd89-8293-4ed1-a56c-e962a5601092-kube-api-access-zrblz" (OuterVolumeSpecName: "kube-api-access-zrblz") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "kube-api-access-zrblz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:13:27.684698 kubelet[1882]: I1213 04:13:27.684620 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.684698 kubelet[1882]: I1213 04:13:27.684637 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.684698 kubelet[1882]: I1213 04:13:27.684656 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:13:27.685297 kubelet[1882]: I1213 04:13:27.684947 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ed48ee-ea06-445c-9d0e-5263df564047-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "82ed48ee-ea06-445c-9d0e-5263df564047" (UID: "82ed48ee-ea06-445c-9d0e-5263df564047"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:13:27.686451 kubelet[1882]: I1213 04:13:27.686429 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ed48ee-ea06-445c-9d0e-5263df564047-kube-api-access-crsjc" (OuterVolumeSpecName: "kube-api-access-crsjc") pod "82ed48ee-ea06-445c-9d0e-5263df564047" (UID: "82ed48ee-ea06-445c-9d0e-5263df564047"). InnerVolumeSpecName "kube-api-access-crsjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:13:27.687584 kubelet[1882]: I1213 04:13:27.687531 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3912dd89-8293-4ed1-a56c-e962a5601092-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3912dd89-8293-4ed1-a56c-e962a5601092" (UID: "3912dd89-8293-4ed1-a56c-e962a5601092"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:13:27.773264 kubelet[1882]: I1213 04:13:27.773180 1882 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3912dd89-8293-4ed1-a56c-e962a5601092-hubble-tls\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.773586 kubelet[1882]: I1213 04:13:27.773556 1882 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-bpf-maps\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.773883 kubelet[1882]: I1213 04:13:27.773825 1882 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-hostproc\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.774003 kubelet[1882]: I1213 04:13:27.773897 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/82ed48ee-ea06-445c-9d0e-5263df564047-cilium-config-path\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.774003 kubelet[1882]: I1213 04:13:27.773933 1882 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-etc-cni-netd\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.774003 kubelet[1882]: I1213 04:13:27.773966 1882 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-lib-modules\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.774003 kubelet[1882]: I1213 04:13:27.773995 1882 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3912dd89-8293-4ed1-a56c-e962a5601092-clustermesh-secrets\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.775051 kubelet[1882]: I1213 04:13:27.774023 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-config-path\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.775051 kubelet[1882]: I1213 04:13:27.774051 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-cgroup\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.775051 kubelet[1882]: I1213 04:13:27.774078 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cilium-run\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.775051 kubelet[1882]: I1213 04:13:27.774103 1882 
reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zrblz\" (UniqueName: \"kubernetes.io/projected/3912dd89-8293-4ed1-a56c-e962a5601092-kube-api-access-zrblz\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.775051 kubelet[1882]: I1213 04:13:27.774134 1882 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-host-proc-sys-net\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.775051 kubelet[1882]: I1213 04:13:27.774163 1882 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-crsjc\" (UniqueName: \"kubernetes.io/projected/82ed48ee-ea06-445c-9d0e-5263df564047-kube-api-access-crsjc\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.775051 kubelet[1882]: I1213 04:13:27.774194 1882 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-host-proc-sys-kernel\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.776053 kubelet[1882]: I1213 04:13:27.774297 1882 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-xtables-lock\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.776053 kubelet[1882]: I1213 04:13:27.774327 1882 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3912dd89-8293-4ed1-a56c-e962a5601092-cni-path\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\"" Dec 13 04:13:27.817910 systemd[1]: Removed slice kubepods-besteffort-pod82ed48ee_ea06_445c_9d0e_5263df564047.slice. Dec 13 04:13:27.854567 systemd[1]: Removed slice kubepods-burstable-pod3912dd89_8293_4ed1_a56c_e962a5601092.slice. 
Dec 13 04:13:27.854868 systemd[1]: kubepods-burstable-pod3912dd89_8293_4ed1_a56c_e962a5601092.slice: Consumed 9.224s CPU time. Dec 13 04:13:27.864087 kubelet[1882]: I1213 04:13:27.864043 1882 scope.go:117] "RemoveContainer" containerID="23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947" Dec 13 04:13:27.900740 env[1138]: time="2024-12-13T04:13:27.899584657Z" level=info msg="RemoveContainer for \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\"" Dec 13 04:13:27.908192 env[1138]: time="2024-12-13T04:13:27.908007037Z" level=info msg="RemoveContainer for \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\" returns successfully" Dec 13 04:13:27.908668 kubelet[1882]: I1213 04:13:27.908646 1882 scope.go:117] "RemoveContainer" containerID="23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947" Dec 13 04:13:27.910544 env[1138]: time="2024-12-13T04:13:27.909367998Z" level=error msg="ContainerStatus for \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\": not found" Dec 13 04:13:27.911090 kubelet[1882]: E1213 04:13:27.911049 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\": not found" containerID="23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947" Dec 13 04:13:27.912717 kubelet[1882]: I1213 04:13:27.911178 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947"} err="failed to get container status \"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"23ddc1338b479b973a24ed0eb184faf069c6f4b07e985a0fda87e8a3d2f94947\": not found" Dec 13 04:13:27.912813 kubelet[1882]: I1213 04:13:27.912799 1882 scope.go:117] "RemoveContainer" containerID="3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f" Dec 13 04:13:27.918678 env[1138]: time="2024-12-13T04:13:27.918594331Z" level=info msg="RemoveContainer for \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\"" Dec 13 04:13:27.927946 env[1138]: time="2024-12-13T04:13:27.927880806Z" level=info msg="RemoveContainer for \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\" returns successfully" Dec 13 04:13:27.928222 kubelet[1882]: I1213 04:13:27.928166 1882 scope.go:117] "RemoveContainer" containerID="060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e" Dec 13 04:13:27.930336 env[1138]: time="2024-12-13T04:13:27.929898664Z" level=info msg="RemoveContainer for \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\"" Dec 13 04:13:27.933823 env[1138]: time="2024-12-13T04:13:27.933791931Z" level=info msg="RemoveContainer for \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\" returns successfully" Dec 13 04:13:27.933997 kubelet[1882]: I1213 04:13:27.933965 1882 scope.go:117] "RemoveContainer" containerID="bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e" Dec 13 04:13:27.935764 env[1138]: time="2024-12-13T04:13:27.935328081Z" level=info msg="RemoveContainer for \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\"" Dec 13 04:13:27.939279 env[1138]: time="2024-12-13T04:13:27.939252786Z" level=info msg="RemoveContainer for \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\" returns successfully" Dec 13 04:13:27.939595 kubelet[1882]: I1213 04:13:27.939573 1882 scope.go:117] "RemoveContainer" containerID="9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89" Dec 13 04:13:27.940782 env[1138]: time="2024-12-13T04:13:27.940752770Z" level=info 
msg="RemoveContainer for \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\"" Dec 13 04:13:27.944299 env[1138]: time="2024-12-13T04:13:27.944259508Z" level=info msg="RemoveContainer for \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\" returns successfully" Dec 13 04:13:27.944500 kubelet[1882]: I1213 04:13:27.944484 1882 scope.go:117] "RemoveContainer" containerID="fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47" Dec 13 04:13:27.945776 env[1138]: time="2024-12-13T04:13:27.945743512Z" level=info msg="RemoveContainer for \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\"" Dec 13 04:13:27.949053 env[1138]: time="2024-12-13T04:13:27.949023573Z" level=info msg="RemoveContainer for \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\" returns successfully" Dec 13 04:13:27.949246 kubelet[1882]: I1213 04:13:27.949229 1882 scope.go:117] "RemoveContainer" containerID="3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f" Dec 13 04:13:27.949534 env[1138]: time="2024-12-13T04:13:27.949469913Z" level=error msg="ContainerStatus for \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\": not found" Dec 13 04:13:27.949693 kubelet[1882]: E1213 04:13:27.949672 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\": not found" containerID="3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f" Dec 13 04:13:27.949793 kubelet[1882]: I1213 04:13:27.949767 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f"} err="failed to get 
container status \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fe45818c6a0c5dfd9da44a4a8c4687e1033462c312141461edeacec561d652f\": not found" Dec 13 04:13:27.949860 kubelet[1882]: I1213 04:13:27.949848 1882 scope.go:117] "RemoveContainer" containerID="060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e" Dec 13 04:13:27.950108 env[1138]: time="2024-12-13T04:13:27.950061176Z" level=error msg="ContainerStatus for \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\": not found" Dec 13 04:13:27.950281 kubelet[1882]: E1213 04:13:27.950264 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\": not found" containerID="060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e" Dec 13 04:13:27.950366 kubelet[1882]: I1213 04:13:27.950348 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e"} err="failed to get container status \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\": rpc error: code = NotFound desc = an error occurred when try to find container \"060e385305a4f482af891db0c24afdb14b6166820215733a7bb76f6df6217e7e\": not found" Dec 13 04:13:27.950434 kubelet[1882]: I1213 04:13:27.950424 1882 scope.go:117] "RemoveContainer" containerID="bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e" Dec 13 04:13:27.950676 env[1138]: time="2024-12-13T04:13:27.950630718Z" level=error msg="ContainerStatus for \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\": not found" Dec 13 04:13:27.950820 kubelet[1882]: E1213 04:13:27.950804 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\": not found" containerID="bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e" Dec 13 04:13:27.950899 kubelet[1882]: I1213 04:13:27.950883 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e"} err="failed to get container status \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcd71828f4eb37117edede672b5daf1ad25aa287ef614b44aa16b10a7d81207e\": not found" Dec 13 04:13:27.950963 kubelet[1882]: I1213 04:13:27.950953 1882 scope.go:117] "RemoveContainer" containerID="9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89" Dec 13 04:13:27.951223 env[1138]: time="2024-12-13T04:13:27.951159062Z" level=error msg="ContainerStatus for \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\": not found" Dec 13 04:13:27.951357 kubelet[1882]: E1213 04:13:27.951342 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\": not found" containerID="9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89" Dec 13 04:13:27.951431 kubelet[1882]: I1213 04:13:27.951415 1882 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89"} err="failed to get container status \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c4bc6f18a5cd331336d571ba5137ef27fbf3b4e4e277ba5421c1b97f85ebe89\": not found" Dec 13 04:13:27.951509 kubelet[1882]: I1213 04:13:27.951498 1882 scope.go:117] "RemoveContainer" containerID="fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47" Dec 13 04:13:27.951763 env[1138]: time="2024-12-13T04:13:27.951718676Z" level=error msg="ContainerStatus for \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\": not found" Dec 13 04:13:27.951900 kubelet[1882]: E1213 04:13:27.951884 1882 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\": not found" containerID="fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47" Dec 13 04:13:27.951979 kubelet[1882]: I1213 04:13:27.951962 1882 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47"} err="failed to get container status \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb86a8bb0bf50a1189383aa1c758d4bdf132c2c9b5edc8f323f7f67ef8e31b47\": not found" Dec 13 04:13:28.275676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b-rootfs.mount: 
Deactivated successfully. Dec 13 04:13:28.276351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db96a5c42032bf1ca621b8b67d5bb012687c3c6ad5e58fd181744694f016d96b-shm.mount: Deactivated successfully. Dec 13 04:13:28.276714 systemd[1]: var-lib-kubelet-pods-3912dd89\x2d8293\x2d4ed1\x2da56c\x2de962a5601092-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzrblz.mount: Deactivated successfully. Dec 13 04:13:28.277077 systemd[1]: var-lib-kubelet-pods-3912dd89\x2d8293\x2d4ed1\x2da56c\x2de962a5601092-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:13:28.277588 systemd[1]: var-lib-kubelet-pods-3912dd89\x2d8293\x2d4ed1\x2da56c\x2de962a5601092-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:13:28.277961 systemd[1]: var-lib-kubelet-pods-82ed48ee\x2dea06\x2d445c\x2d9d0e\x2d5263df564047-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcrsjc.mount: Deactivated successfully. Dec 13 04:13:29.231273 kubelet[1882]: I1213 04:13:29.231138 1882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3912dd89-8293-4ed1-a56c-e962a5601092" path="/var/lib/kubelet/pods/3912dd89-8293-4ed1-a56c-e962a5601092/volumes" Dec 13 04:13:29.234119 kubelet[1882]: I1213 04:13:29.234012 1882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ed48ee-ea06-445c-9d0e-5263df564047" path="/var/lib/kubelet/pods/82ed48ee-ea06-445c-9d0e-5263df564047/volumes" Dec 13 04:13:29.395426 sshd[3408]: pam_unix(sshd:session): session closed for user core Dec 13 04:13:29.402020 systemd[1]: Started sshd@20-172.24.4.188:22-172.24.4.1:45306.service. Dec 13 04:13:29.406949 systemd[1]: sshd@19-172.24.4.188:22-172.24.4.1:42966.service: Deactivated successfully. Dec 13 04:13:29.409162 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 04:13:29.409581 systemd[1]: session-20.scope: Consumed 1.285s CPU time. 
Dec 13 04:13:29.411113 systemd-logind[1133]: Session 20 logged out. Waiting for processes to exit. Dec 13 04:13:29.415969 systemd-logind[1133]: Removed session 20. Dec 13 04:13:30.254009 kubelet[1882]: I1213 04:13:30.253391 1882 setters.go:600] "Node became not ready" node="ci-3510-3-6-e-a81afd2c25.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T04:13:30Z","lastTransitionTime":"2024-12-13T04:13:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 04:13:30.608237 sshd[3573]: Accepted publickey for core from 172.24.4.1 port 45306 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM Dec 13 04:13:30.611614 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:13:30.623264 systemd[1]: Started session-21.scope. Dec 13 04:13:30.623947 systemd-logind[1133]: New session 21 of user core. 
Dec 13 04:13:32.151198 kubelet[1882]: E1213 04:13:32.151166 1882 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3912dd89-8293-4ed1-a56c-e962a5601092" containerName="mount-cgroup" Dec 13 04:13:32.151775 kubelet[1882]: E1213 04:13:32.151760 1882 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3912dd89-8293-4ed1-a56c-e962a5601092" containerName="apply-sysctl-overwrites" Dec 13 04:13:32.151876 kubelet[1882]: E1213 04:13:32.151863 1882 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3912dd89-8293-4ed1-a56c-e962a5601092" containerName="mount-bpf-fs" Dec 13 04:13:32.151963 kubelet[1882]: E1213 04:13:32.151951 1882 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3912dd89-8293-4ed1-a56c-e962a5601092" containerName="clean-cilium-state" Dec 13 04:13:32.152054 kubelet[1882]: E1213 04:13:32.152043 1882 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3912dd89-8293-4ed1-a56c-e962a5601092" containerName="cilium-agent" Dec 13 04:13:32.152153 kubelet[1882]: E1213 04:13:32.152138 1882 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="82ed48ee-ea06-445c-9d0e-5263df564047" containerName="cilium-operator" Dec 13 04:13:32.152312 kubelet[1882]: I1213 04:13:32.152286 1882 memory_manager.go:354] "RemoveStaleState removing state" podUID="3912dd89-8293-4ed1-a56c-e962a5601092" containerName="cilium-agent" Dec 13 04:13:32.152419 kubelet[1882]: I1213 04:13:32.152407 1882 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ed48ee-ea06-445c-9d0e-5263df564047" containerName="cilium-operator" Dec 13 04:13:32.172919 systemd[1]: Created slice kubepods-burstable-podb5098419_cfbd_4f81_ae63_1cf152dffdba.slice. Dec 13 04:13:32.282196 sshd[3573]: pam_unix(sshd:session): session closed for user core Dec 13 04:13:32.289192 systemd[1]: sshd@20-172.24.4.188:22-172.24.4.1:45306.service: Deactivated successfully. 
Dec 13 04:13:32.290782 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 04:13:32.291318 systemd[1]: session-21.scope: Consumed 1.141s CPU time. Dec 13 04:13:32.292196 systemd-logind[1133]: Session 21 logged out. Waiting for processes to exit. Dec 13 04:13:32.295540 systemd[1]: Started sshd@21-172.24.4.188:22-172.24.4.1:45316.service. Dec 13 04:13:32.299063 systemd-logind[1133]: Removed session 21. Dec 13 04:13:32.314786 kubelet[1882]: I1213 04:13:32.314718 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5098419-cfbd-4f81-ae63-1cf152dffdba-clustermesh-secrets\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.315162 kubelet[1882]: I1213 04:13:32.315125 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-host-proc-sys-net\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.315441 kubelet[1882]: I1213 04:13:32.315382 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-bpf-maps\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.315722 kubelet[1882]: I1213 04:13:32.315660 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-etc-cni-netd\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.315988 kubelet[1882]: I1213 04:13:32.315929 1882 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-cgroup\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.316266 kubelet[1882]: I1213 04:13:32.316168 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cni-path\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.316581 kubelet[1882]: I1213 04:13:32.316541 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-config-path\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.316875 kubelet[1882]: I1213 04:13:32.316840 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-lib-modules\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.317189 kubelet[1882]: I1213 04:13:32.317137 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-ipsec-secrets\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.317537 kubelet[1882]: I1213 04:13:32.317503 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-hostproc\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.317970 kubelet[1882]: I1213 04:13:32.317859 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-xtables-lock\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.318532 kubelet[1882]: I1213 04:13:32.318411 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7mff\" (UniqueName: \"kubernetes.io/projected/b5098419-cfbd-4f81-ae63-1cf152dffdba-kube-api-access-g7mff\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.319088 kubelet[1882]: I1213 04:13:32.319005 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-run\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.319609 kubelet[1882]: I1213 04:13:32.319475 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-host-proc-sys-kernel\") pod \"cilium-5rtmt\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.319945 kubelet[1882]: I1213 04:13:32.319909 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5098419-cfbd-4f81-ae63-1cf152dffdba-hubble-tls\") pod \"cilium-5rtmt\" (UID: 
\"b5098419-cfbd-4f81-ae63-1cf152dffdba\") " pod="kube-system/cilium-5rtmt" Dec 13 04:13:32.400010 kubelet[1882]: E1213 04:13:32.399879 1882 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:13:32.483855 env[1138]: time="2024-12-13T04:13:32.482537447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rtmt,Uid:b5098419-cfbd-4f81-ae63-1cf152dffdba,Namespace:kube-system,Attempt:0,}" Dec 13 04:13:32.512366 env[1138]: time="2024-12-13T04:13:32.512305047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:13:32.512578 env[1138]: time="2024-12-13T04:13:32.512553654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:13:32.512694 env[1138]: time="2024-12-13T04:13:32.512672849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:13:32.513067 env[1138]: time="2024-12-13T04:13:32.513037214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8 pid=3598 runtime=io.containerd.runc.v2 Dec 13 04:13:32.536784 systemd[1]: Started cri-containerd-2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8.scope. 
Dec 13 04:13:32.575075 env[1138]: time="2024-12-13T04:13:32.575014338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rtmt,Uid:b5098419-cfbd-4f81-ae63-1cf152dffdba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8\"" Dec 13 04:13:32.580648 env[1138]: time="2024-12-13T04:13:32.580593799Z" level=info msg="CreateContainer within sandbox \"2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:13:32.596493 env[1138]: time="2024-12-13T04:13:32.596441072Z" level=info msg="CreateContainer within sandbox \"2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\"" Dec 13 04:13:32.599076 env[1138]: time="2024-12-13T04:13:32.599030164Z" level=info msg="StartContainer for \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\"" Dec 13 04:13:32.618919 systemd[1]: Started cri-containerd-9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f.scope. Dec 13 04:13:32.635654 systemd[1]: cri-containerd-9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f.scope: Deactivated successfully. 
Dec 13 04:13:32.661151 env[1138]: time="2024-12-13T04:13:32.661050821Z" level=info msg="shim disconnected" id=9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f Dec 13 04:13:32.661151 env[1138]: time="2024-12-13T04:13:32.661135930Z" level=warning msg="cleaning up after shim disconnected" id=9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f namespace=k8s.io Dec 13 04:13:32.661501 env[1138]: time="2024-12-13T04:13:32.661165736Z" level=info msg="cleaning up dead shim" Dec 13 04:13:32.676092 env[1138]: time="2024-12-13T04:13:32.676031232Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3655 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T04:13:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 04:13:32.676791 env[1138]: time="2024-12-13T04:13:32.676653944Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Dec 13 04:13:32.677743 env[1138]: time="2024-12-13T04:13:32.677668062Z" level=error msg="Failed to pipe stderr of container \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\"" error="reading from a closed fifo" Dec 13 04:13:32.677830 env[1138]: time="2024-12-13T04:13:32.677801343Z" level=error msg="Failed to pipe stdout of container \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\"" error="reading from a closed fifo" Dec 13 04:13:32.690944 env[1138]: time="2024-12-13T04:13:32.689711226Z" level=error msg="StartContainer for \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 04:13:32.691773 kubelet[1882]: E1213 04:13:32.691516 1882 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f" Dec 13 04:13:32.724261 kubelet[1882]: E1213 04:13:32.724146 1882 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 04:13:32.724261 kubelet[1882]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 04:13:32.724261 kubelet[1882]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 04:13:32.724261 kubelet[1882]: rm /hostbin/cilium-mount Dec 13 04:13:32.724702 kubelet[1882]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7mff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-5rtmt_kube-system(b5098419-cfbd-4f81-ae63-1cf152dffdba): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 04:13:32.724702 kubelet[1882]: > logger="UnhandledError" Dec 13 04:13:32.726385 kubelet[1882]: E1213 04:13:32.726289 1882 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5rtmt" podUID="b5098419-cfbd-4f81-ae63-1cf152dffdba" Dec 13 04:13:32.875439 env[1138]: time="2024-12-13T04:13:32.870023865Z" level=info msg="CreateContainer within sandbox \"2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 04:13:32.898329 env[1138]: time="2024-12-13T04:13:32.898173801Z" level=info msg="CreateContainer within sandbox \"2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0\"" Dec 13 04:13:32.901806 env[1138]: time="2024-12-13T04:13:32.901715315Z" level=info msg="StartContainer for \"5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0\"" Dec 13 04:13:32.943700 systemd[1]: Started cri-containerd-5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0.scope. Dec 13 04:13:32.954458 systemd[1]: cri-containerd-5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0.scope: Deactivated successfully. Dec 13 04:13:32.954665 systemd[1]: Stopped cri-containerd-5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0.scope. 
Dec 13 04:13:32.966385 env[1138]: time="2024-12-13T04:13:32.966315085Z" level=info msg="shim disconnected" id=5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0 Dec 13 04:13:32.966576 env[1138]: time="2024-12-13T04:13:32.966384847Z" level=warning msg="cleaning up after shim disconnected" id=5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0 namespace=k8s.io Dec 13 04:13:32.966576 env[1138]: time="2024-12-13T04:13:32.966397681Z" level=info msg="cleaning up dead shim" Dec 13 04:13:32.975078 env[1138]: time="2024-12-13T04:13:32.975009848Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3695 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T04:13:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 04:13:32.975422 env[1138]: time="2024-12-13T04:13:32.975348656Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Dec 13 04:13:32.975656 env[1138]: time="2024-12-13T04:13:32.975615899Z" level=error msg="Failed to pipe stdout of container \"5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0\"" error="reading from a closed fifo" Dec 13 04:13:32.976365 env[1138]: time="2024-12-13T04:13:32.976299425Z" level=error msg="Failed to pipe stderr of container \"5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0\"" error="reading from a closed fifo" Dec 13 04:13:32.979854 env[1138]: time="2024-12-13T04:13:32.979805734Z" level=error msg="StartContainer for \"5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 04:13:32.980104 kubelet[1882]: E1213 04:13:32.980051 1882 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0" Dec 13 04:13:32.980302 kubelet[1882]: E1213 04:13:32.980236 1882 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 04:13:32.980302 kubelet[1882]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 04:13:32.980302 kubelet[1882]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 04:13:32.980302 kubelet[1882]: rm /hostbin/cilium-mount Dec 13 04:13:32.980302 kubelet[1882]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7mff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-5rtmt_kube-system(b5098419-cfbd-4f81-ae63-1cf152dffdba): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 04:13:32.980302 kubelet[1882]: > logger="UnhandledError" Dec 13 04:13:32.981560 kubelet[1882]: E1213 04:13:32.981462 1882 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5rtmt" podUID="b5098419-cfbd-4f81-ae63-1cf152dffdba" Dec 13 04:13:33.850020 sshd[3584]: Accepted publickey for core from 172.24.4.1 port 45316 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM Dec 13 04:13:33.853024 sshd[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:13:33.865496 systemd[1]: Started session-22.scope. Dec 13 04:13:33.866977 systemd-logind[1133]: New session 22 of user core. Dec 13 04:13:33.873862 kubelet[1882]: I1213 04:13:33.873794 1882 scope.go:117] "RemoveContainer" containerID="9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f" Dec 13 04:13:33.874660 kubelet[1882]: I1213 04:13:33.874600 1882 scope.go:117] "RemoveContainer" containerID="9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f" Dec 13 04:13:33.883126 env[1138]: time="2024-12-13T04:13:33.882518111Z" level=info msg="RemoveContainer for \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\"" Dec 13 04:13:33.897993 env[1138]: time="2024-12-13T04:13:33.892396581Z" level=info msg="RemoveContainer for \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\"" Dec 13 04:13:33.897993 env[1138]: time="2024-12-13T04:13:33.892634228Z" level=error msg="RemoveContainer for \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\" failed" error="failed to set removing state for container \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\": container is already in removing state" Dec 13 04:13:33.897993 env[1138]: time="2024-12-13T04:13:33.897336077Z" level=info msg="RemoveContainer for 
\"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\" returns successfully" Dec 13 04:13:33.898431 kubelet[1882]: E1213 04:13:33.894053 1882 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\": container is already in removing state" containerID="9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f" Dec 13 04:13:33.898431 kubelet[1882]: E1213 04:13:33.898079 1882 kuberuntime_container.go:896] "Unhandled Error" err="failed to remove pod init container \"mount-cgroup\": rpc error: code = Unknown desc = failed to set removing state for container \"9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f\": container is already in removing state; Skipping pod \"cilium-5rtmt_kube-system(b5098419-cfbd-4f81-ae63-1cf152dffdba)\"" logger="UnhandledError" Dec 13 04:13:33.899906 kubelet[1882]: E1213 04:13:33.899842 1882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-5rtmt_kube-system(b5098419-cfbd-4f81-ae63-1cf152dffdba)\"" pod="kube-system/cilium-5rtmt" podUID="b5098419-cfbd-4f81-ae63-1cf152dffdba" Dec 13 04:13:34.883773 env[1138]: time="2024-12-13T04:13:34.882349404Z" level=info msg="StopPodSandbox for \"2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8\"" Dec 13 04:13:34.883773 env[1138]: time="2024-12-13T04:13:34.882514455Z" level=info msg="Container to stop \"5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:13:34.894361 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8-shm.mount: Deactivated successfully. 
Dec 13 04:13:34.915517 systemd[1]: cri-containerd-2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8.scope: Deactivated successfully.
Dec 13 04:13:34.917774 sshd[3584]: pam_unix(sshd:session): session closed for user core
Dec 13 04:13:34.942038 systemd[1]: Started sshd@22-172.24.4.188:22-172.24.4.1:52780.service.
Dec 13 04:13:34.943629 systemd[1]: sshd@21-172.24.4.188:22-172.24.4.1:45316.service: Deactivated successfully.
Dec 13 04:13:34.946331 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 04:13:34.951517 systemd-logind[1133]: Session 22 logged out. Waiting for processes to exit.
Dec 13 04:13:34.957556 systemd-logind[1133]: Removed session 22.
Dec 13 04:13:34.984776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8-rootfs.mount: Deactivated successfully.
Dec 13 04:13:34.993954 env[1138]: time="2024-12-13T04:13:34.993875116Z" level=info msg="shim disconnected" id=2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8
Dec 13 04:13:34.994172 env[1138]: time="2024-12-13T04:13:34.993970045Z" level=warning msg="cleaning up after shim disconnected" id=2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8 namespace=k8s.io
Dec 13 04:13:34.994172 env[1138]: time="2024-12-13T04:13:34.993997276Z" level=info msg="cleaning up dead shim"
Dec 13 04:13:35.003542 env[1138]: time="2024-12-13T04:13:35.003461327Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3738 runtime=io.containerd.runc.v2\n"
Dec 13 04:13:35.003942 env[1138]: time="2024-12-13T04:13:35.003913127Z" level=info msg="TearDown network for sandbox \"2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8\" successfully"
Dec 13 04:13:35.004015 env[1138]: time="2024-12-13T04:13:35.003944206Z" level=info msg="StopPodSandbox for \"2ecf5b981b2028fde0144b2abba78b63fa051b41b95d821aab5e4cbf3c480cf8\" returns successfully"
Dec 13 04:13:35.141338 kubelet[1882]: I1213 04:13:35.141045 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-host-proc-sys-net\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.141338 kubelet[1882]: I1213 04:13:35.141126 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cni-path\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.141338 kubelet[1882]: I1213 04:13:35.141180 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-config-path\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.141338 kubelet[1882]: I1213 04:13:35.141256 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-hostproc\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.143781 kubelet[1882]: I1213 04:13:35.141301 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-run\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.143964 kubelet[1882]: I1213 04:13:35.143890 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-host-proc-sys-kernel\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144073 kubelet[1882]: I1213 04:13:35.143975 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-xtables-lock\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144073 kubelet[1882]: I1213 04:13:35.144027 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7mff\" (UniqueName: \"kubernetes.io/projected/b5098419-cfbd-4f81-ae63-1cf152dffdba-kube-api-access-g7mff\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144263 kubelet[1882]: I1213 04:13:35.144074 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-ipsec-secrets\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144263 kubelet[1882]: I1213 04:13:35.144116 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5098419-cfbd-4f81-ae63-1cf152dffdba-clustermesh-secrets\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144263 kubelet[1882]: I1213 04:13:35.144155 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-bpf-maps\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144263 kubelet[1882]: I1213 04:13:35.144196 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5098419-cfbd-4f81-ae63-1cf152dffdba-hubble-tls\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144550 kubelet[1882]: I1213 04:13:35.144265 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-etc-cni-netd\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144550 kubelet[1882]: I1213 04:13:35.144306 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-lib-modules\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144550 kubelet[1882]: I1213 04:13:35.144346 1882 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-cgroup\") pod \"b5098419-cfbd-4f81-ae63-1cf152dffdba\" (UID: \"b5098419-cfbd-4f81-ae63-1cf152dffdba\") "
Dec 13 04:13:35.144550 kubelet[1882]: I1213 04:13:35.142567 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cni-path" (OuterVolumeSpecName: "cni-path") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.144550 kubelet[1882]: I1213 04:13:35.142665 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.144550 kubelet[1882]: I1213 04:13:35.144448 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.144550 kubelet[1882]: I1213 04:13:35.144523 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.145010 kubelet[1882]: I1213 04:13:35.144569 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.145321 kubelet[1882]: I1213 04:13:35.145191 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-hostproc" (OuterVolumeSpecName: "hostproc") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.146063 kubelet[1882]: I1213 04:13:35.146021 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.150646 kubelet[1882]: I1213 04:13:35.150594 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.156885 systemd[1]: var-lib-kubelet-pods-b5098419\x2dcfbd\x2d4f81\x2dae63\x2d1cf152dffdba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg7mff.mount: Deactivated successfully.
Dec 13 04:13:35.160710 kubelet[1882]: I1213 04:13:35.150936 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.160710 kubelet[1882]: I1213 04:13:35.159534 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:13:35.160710 kubelet[1882]: I1213 04:13:35.160600 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 04:13:35.161412 kubelet[1882]: I1213 04:13:35.161337 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5098419-cfbd-4f81-ae63-1cf152dffdba-kube-api-access-g7mff" (OuterVolumeSpecName: "kube-api-access-g7mff") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "kube-api-access-g7mff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:13:35.170193 systemd[1]: var-lib-kubelet-pods-b5098419\x2dcfbd\x2d4f81\x2dae63\x2d1cf152dffdba-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 04:13:35.172923 kubelet[1882]: I1213 04:13:35.172853 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:13:35.179765 systemd[1]: var-lib-kubelet-pods-b5098419\x2dcfbd\x2d4f81\x2dae63\x2d1cf152dffdba-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 04:13:35.182886 kubelet[1882]: I1213 04:13:35.182826 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5098419-cfbd-4f81-ae63-1cf152dffdba-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 04:13:35.185511 kubelet[1882]: I1213 04:13:35.185459 1882 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5098419-cfbd-4f81-ae63-1cf152dffdba-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b5098419-cfbd-4f81-ae63-1cf152dffdba" (UID: "b5098419-cfbd-4f81-ae63-1cf152dffdba"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 04:13:35.239672 systemd[1]: Removed slice kubepods-burstable-podb5098419_cfbd_4f81_ae63_1cf152dffdba.slice.
Dec 13 04:13:35.245544 kubelet[1882]: I1213 04:13:35.245492 1882 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-etc-cni-netd\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.245818 kubelet[1882]: I1213 04:13:35.245789 1882 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-lib-modules\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.245983 kubelet[1882]: I1213 04:13:35.245956 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-cgroup\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.246146 kubelet[1882]: I1213 04:13:35.246117 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-config-path\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.246365 kubelet[1882]: I1213 04:13:35.246337 1882 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-hostproc\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.246552 kubelet[1882]: I1213 04:13:35.246522 1882 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-host-proc-sys-net\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.246777 kubelet[1882]: I1213 04:13:35.246746 1882 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cni-path\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.246947 kubelet[1882]: I1213 04:13:35.246918 1882 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-xtables-lock\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.247102 kubelet[1882]: I1213 04:13:35.247076 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-run\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.247439 kubelet[1882]: I1213 04:13:35.247407 1882 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-host-proc-sys-kernel\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.247652 kubelet[1882]: I1213 04:13:35.247619 1882 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g7mff\" (UniqueName: \"kubernetes.io/projected/b5098419-cfbd-4f81-ae63-1cf152dffdba-kube-api-access-g7mff\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.247824 kubelet[1882]: I1213 04:13:35.247796 1882 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5098419-cfbd-4f81-ae63-1cf152dffdba-clustermesh-secrets\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.247986 kubelet[1882]: I1213 04:13:35.247959 1882 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b5098419-cfbd-4f81-ae63-1cf152dffdba-cilium-ipsec-secrets\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.248141 kubelet[1882]: I1213 04:13:35.248115 1882 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5098419-cfbd-4f81-ae63-1cf152dffdba-bpf-maps\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.248395 kubelet[1882]: I1213 04:13:35.248355 1882 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5098419-cfbd-4f81-ae63-1cf152dffdba-hubble-tls\") on node \"ci-3510-3-6-e-a81afd2c25.novalocal\" DevicePath \"\""
Dec 13 04:13:35.807915 kubelet[1882]: W1213 04:13:35.807797 1882 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5098419_cfbd_4f81_ae63_1cf152dffdba.slice/cri-containerd-9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f.scope WatchSource:0}: container "9fa70b306eef5b6a0438915a46ff4adbe70ce76f13eae845d6df86d8b4a7810f" in namespace "k8s.io": not found
Dec 13 04:13:35.894171 kubelet[1882]: I1213 04:13:35.888318 1882 scope.go:117] "RemoveContainer" containerID="5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0"
Dec 13 04:13:35.894434 env[1138]: time="2024-12-13T04:13:35.893685629Z" level=info msg="RemoveContainer for \"5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0\""
Dec 13 04:13:35.890820 systemd[1]: var-lib-kubelet-pods-b5098419\x2dcfbd\x2d4f81\x2dae63\x2d1cf152dffdba-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 04:13:35.908164 env[1138]: time="2024-12-13T04:13:35.907551514Z" level=info msg="RemoveContainer for \"5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0\" returns successfully"
Dec 13 04:13:35.993573 kubelet[1882]: E1213 04:13:35.993508 1882 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5098419-cfbd-4f81-ae63-1cf152dffdba" containerName="mount-cgroup"
Dec 13 04:13:35.993787 kubelet[1882]: I1213 04:13:35.993638 1882 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5098419-cfbd-4f81-ae63-1cf152dffdba" containerName="mount-cgroup"
Dec 13 04:13:35.993787 kubelet[1882]: E1213 04:13:35.993694 1882 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5098419-cfbd-4f81-ae63-1cf152dffdba" containerName="mount-cgroup"
Dec 13 04:13:35.993787 kubelet[1882]: I1213 04:13:35.993736 1882 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5098419-cfbd-4f81-ae63-1cf152dffdba" containerName="mount-cgroup"
Dec 13 04:13:36.005304 systemd[1]: Created slice kubepods-burstable-pod1c01963e_ba6e_488c_9482_d04448baa802.slice.
Dec 13 04:13:36.054036 kubelet[1882]: I1213 04:13:36.053997 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-xtables-lock\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054052 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-lib-modules\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054078 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c01963e-ba6e-488c-9482-d04448baa802-cilium-config-path\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054096 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-host-proc-sys-kernel\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054114 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c01963e-ba6e-488c-9482-d04448baa802-hubble-tls\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054131 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-cni-path\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054148 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-cilium-cgroup\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054165 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-host-proc-sys-net\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054183 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-cilium-run\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054211 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-hostproc\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054238 kubelet[1882]: I1213 04:13:36.054231 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-etc-cni-netd\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054600 kubelet[1882]: I1213 04:13:36.054249 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1c01963e-ba6e-488c-9482-d04448baa802-cilium-ipsec-secrets\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054600 kubelet[1882]: I1213 04:13:36.054269 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c01963e-ba6e-488c-9482-d04448baa802-bpf-maps\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054600 kubelet[1882]: I1213 04:13:36.054295 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c01963e-ba6e-488c-9482-d04448baa802-clustermesh-secrets\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.054600 kubelet[1882]: I1213 04:13:36.054313 1882 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd6m5\" (UniqueName: \"kubernetes.io/projected/1c01963e-ba6e-488c-9482-d04448baa802-kube-api-access-bd6m5\") pod \"cilium-mfzm9\" (UID: \"1c01963e-ba6e-488c-9482-d04448baa802\") " pod="kube-system/cilium-mfzm9"
Dec 13 04:13:36.242147 sshd[3724]: Accepted publickey for core from 172.24.4.1 port 52780 ssh2: RSA SHA256:i/IC0j0y8y59VaoiLkU9hl7M0K2qZ9B1gqKErvsmQpM
Dec 13 04:13:36.242656 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:13:36.250435 systemd[1]: Started session-23.scope.
Dec 13 04:13:36.250797 systemd-logind[1133]: New session 23 of user core.
Dec 13 04:13:36.312019 env[1138]: time="2024-12-13T04:13:36.311952727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfzm9,Uid:1c01963e-ba6e-488c-9482-d04448baa802,Namespace:kube-system,Attempt:0,}"
Dec 13 04:13:36.330234 env[1138]: time="2024-12-13T04:13:36.329557539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:13:36.330234 env[1138]: time="2024-12-13T04:13:36.329595821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:13:36.330234 env[1138]: time="2024-12-13T04:13:36.329608935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:13:36.330234 env[1138]: time="2024-12-13T04:13:36.329717951Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b pid=3766 runtime=io.containerd.runc.v2
Dec 13 04:13:36.348834 systemd[1]: Started cri-containerd-b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b.scope.
Dec 13 04:13:36.386664 env[1138]: time="2024-12-13T04:13:36.385496291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfzm9,Uid:1c01963e-ba6e-488c-9482-d04448baa802,Namespace:kube-system,Attempt:0,} returns sandbox id \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\"" Dec 13 04:13:36.388873 env[1138]: time="2024-12-13T04:13:36.388830886Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:13:36.412918 env[1138]: time="2024-12-13T04:13:36.412852494Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"799a442907c633db7fbb4413097b499e1be28593b4f417973181111e29bc6866\"" Dec 13 04:13:36.414008 env[1138]: time="2024-12-13T04:13:36.413952594Z" level=info msg="StartContainer for \"799a442907c633db7fbb4413097b499e1be28593b4f417973181111e29bc6866\"" Dec 13 04:13:36.444552 systemd[1]: Started cri-containerd-799a442907c633db7fbb4413097b499e1be28593b4f417973181111e29bc6866.scope. Dec 13 04:13:36.512938 env[1138]: time="2024-12-13T04:13:36.512795119Z" level=info msg="StartContainer for \"799a442907c633db7fbb4413097b499e1be28593b4f417973181111e29bc6866\" returns successfully" Dec 13 04:13:36.527854 systemd[1]: cri-containerd-799a442907c633db7fbb4413097b499e1be28593b4f417973181111e29bc6866.scope: Deactivated successfully. 
Dec 13 04:13:36.568553 env[1138]: time="2024-12-13T04:13:36.568475736Z" level=info msg="shim disconnected" id=799a442907c633db7fbb4413097b499e1be28593b4f417973181111e29bc6866 Dec 13 04:13:36.568553 env[1138]: time="2024-12-13T04:13:36.568540848Z" level=warning msg="cleaning up after shim disconnected" id=799a442907c633db7fbb4413097b499e1be28593b4f417973181111e29bc6866 namespace=k8s.io Dec 13 04:13:36.568553 env[1138]: time="2024-12-13T04:13:36.568553241Z" level=info msg="cleaning up dead shim" Dec 13 04:13:36.577412 env[1138]: time="2024-12-13T04:13:36.577363572Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3854 runtime=io.containerd.runc.v2\n" Dec 13 04:13:36.898461 env[1138]: time="2024-12-13T04:13:36.898296876Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:13:36.934042 env[1138]: time="2024-12-13T04:13:36.933985840Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466\"" Dec 13 04:13:36.935047 env[1138]: time="2024-12-13T04:13:36.935023473Z" level=info msg="StartContainer for \"5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466\"" Dec 13 04:13:36.961275 systemd[1]: Started cri-containerd-5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466.scope. Dec 13 04:13:37.016500 env[1138]: time="2024-12-13T04:13:37.016434041Z" level=info msg="StartContainer for \"5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466\" returns successfully" Dec 13 04:13:37.025245 systemd[1]: cri-containerd-5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466.scope: Deactivated successfully. 
Dec 13 04:13:37.050612 env[1138]: time="2024-12-13T04:13:37.050559574Z" level=info msg="shim disconnected" id=5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466 Dec 13 04:13:37.050899 env[1138]: time="2024-12-13T04:13:37.050881380Z" level=warning msg="cleaning up after shim disconnected" id=5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466 namespace=k8s.io Dec 13 04:13:37.050976 env[1138]: time="2024-12-13T04:13:37.050961119Z" level=info msg="cleaning up dead shim" Dec 13 04:13:37.065274 env[1138]: time="2024-12-13T04:13:37.064907556Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3921 runtime=io.containerd.runc.v2\n" Dec 13 04:13:37.231325 kubelet[1882]: I1213 04:13:37.231243 1882 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5098419-cfbd-4f81-ae63-1cf152dffdba" path="/var/lib/kubelet/pods/b5098419-cfbd-4f81-ae63-1cf152dffdba/volumes" Dec 13 04:13:37.402258 kubelet[1882]: E1213 04:13:37.402159 1882 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:13:37.891575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466-rootfs.mount: Deactivated successfully. Dec 13 04:13:37.918369 env[1138]: time="2024-12-13T04:13:37.918043515Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:13:37.954628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576084884.mount: Deactivated successfully. 
Dec 13 04:13:37.972012 env[1138]: time="2024-12-13T04:13:37.971962217Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589\""
Dec 13 04:13:37.972927 env[1138]: time="2024-12-13T04:13:37.972877038Z" level=info msg="StartContainer for \"29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589\""
Dec 13 04:13:38.002495 systemd[1]: Started cri-containerd-29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589.scope.
Dec 13 04:13:38.040376 env[1138]: time="2024-12-13T04:13:38.040322206Z" level=info msg="StartContainer for \"29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589\" returns successfully"
Dec 13 04:13:38.060476 systemd[1]: cri-containerd-29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589.scope: Deactivated successfully.
Dec 13 04:13:38.094136 env[1138]: time="2024-12-13T04:13:38.093707334Z" level=info msg="shim disconnected" id=29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589
Dec 13 04:13:38.095081 env[1138]: time="2024-12-13T04:13:38.095033870Z" level=warning msg="cleaning up after shim disconnected" id=29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589 namespace=k8s.io
Dec 13 04:13:38.095299 env[1138]: time="2024-12-13T04:13:38.095262321Z" level=info msg="cleaning up dead shim"
Dec 13 04:13:38.110731 env[1138]: time="2024-12-13T04:13:38.110659547Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3983 runtime=io.containerd.runc.v2\n"
Dec 13 04:13:38.892290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589-rootfs.mount: Deactivated successfully.
Dec 13 04:13:38.919270 kubelet[1882]: W1213 04:13:38.918680 1882 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5098419_cfbd_4f81_ae63_1cf152dffdba.slice/cri-containerd-5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0.scope WatchSource:0}: container "5ed717c225467da3e0276098c12af07b0ca101d973d34acaf3ec500d390f0dd0" in namespace "k8s.io": not found
Dec 13 04:13:38.935258 env[1138]: time="2024-12-13T04:13:38.929698194Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 04:13:38.986292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224213923.mount: Deactivated successfully.
Dec 13 04:13:38.991974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022979094.mount: Deactivated successfully.
Dec 13 04:13:39.001439 env[1138]: time="2024-12-13T04:13:39.001376648Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"67328251342cac237e4cb14c61bcbbf8ce2850a88b8b907880713d0583dd4ebf\""
Dec 13 04:13:39.003407 env[1138]: time="2024-12-13T04:13:39.003356143Z" level=info msg="StartContainer for \"67328251342cac237e4cb14c61bcbbf8ce2850a88b8b907880713d0583dd4ebf\""
Dec 13 04:13:39.040174 systemd[1]: Started cri-containerd-67328251342cac237e4cb14c61bcbbf8ce2850a88b8b907880713d0583dd4ebf.scope.
Dec 13 04:13:39.074756 systemd[1]: cri-containerd-67328251342cac237e4cb14c61bcbbf8ce2850a88b8b907880713d0583dd4ebf.scope: Deactivated successfully.
Dec 13 04:13:39.079985 env[1138]: time="2024-12-13T04:13:39.079929390Z" level=info msg="StartContainer for \"67328251342cac237e4cb14c61bcbbf8ce2850a88b8b907880713d0583dd4ebf\" returns successfully"
Dec 13 04:13:39.117809 env[1138]: time="2024-12-13T04:13:39.117729148Z" level=info msg="shim disconnected" id=67328251342cac237e4cb14c61bcbbf8ce2850a88b8b907880713d0583dd4ebf
Dec 13 04:13:39.117809 env[1138]: time="2024-12-13T04:13:39.117800222Z" level=warning msg="cleaning up after shim disconnected" id=67328251342cac237e4cb14c61bcbbf8ce2850a88b8b907880713d0583dd4ebf namespace=k8s.io
Dec 13 04:13:39.117809 env[1138]: time="2024-12-13T04:13:39.117812194Z" level=info msg="cleaning up dead shim"
Dec 13 04:13:39.126100 env[1138]: time="2024-12-13T04:13:39.126045359Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:13:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4043 runtime=io.containerd.runc.v2\n"
Dec 13 04:13:39.937997 env[1138]: time="2024-12-13T04:13:39.937880776Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 04:13:39.994346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000568232.mount: Deactivated successfully.
Dec 13 04:13:40.018009 env[1138]: time="2024-12-13T04:13:40.017969088Z" level=info msg="CreateContainer within sandbox \"b505fbef4bbcb49ebf7824ce87a6dbeb265abca0b6bc2bfc892822a75389df9b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb\""
Dec 13 04:13:40.018805 env[1138]: time="2024-12-13T04:13:40.018758534Z" level=info msg="StartContainer for \"c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb\""
Dec 13 04:13:40.039991 systemd[1]: Started cri-containerd-c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb.scope.
Dec 13 04:13:40.088449 env[1138]: time="2024-12-13T04:13:40.088409575Z" level=info msg="StartContainer for \"c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb\" returns successfully"
Dec 13 04:13:40.988278 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 04:13:41.077296 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Dec 13 04:13:41.184247 systemd[1]: run-containerd-runc-k8s.io-c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb-runc.mtzdLP.mount: Deactivated successfully.
Dec 13 04:13:42.058769 kubelet[1882]: W1213 04:13:42.058714 1882 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c01963e_ba6e_488c_9482_d04448baa802.slice/cri-containerd-799a442907c633db7fbb4413097b499e1be28593b4f417973181111e29bc6866.scope WatchSource:0}: task 799a442907c633db7fbb4413097b499e1be28593b4f417973181111e29bc6866 not found: not found
Dec 13 04:13:43.409494 systemd[1]: run-containerd-runc-k8s.io-c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb-runc.61a1lz.mount: Deactivated successfully.
Dec 13 04:13:44.336092 systemd-networkd[979]: lxc_health: Link UP
Dec 13 04:13:44.354157 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 04:13:44.353294 systemd-networkd[979]: lxc_health: Gained carrier
Dec 13 04:13:44.385176 kubelet[1882]: I1213 04:13:44.385103 1882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mfzm9" podStartSLOduration=9.385082678 podStartE2EDuration="9.385082678s" podCreationTimestamp="2024-12-13 04:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:13:40.964889946 +0000 UTC m=+153.895368777" watchObservedRunningTime="2024-12-13 04:13:44.385082678 +0000 UTC m=+157.315561499"
Dec 13 04:13:45.170637 kubelet[1882]: W1213 04:13:45.170582 1882 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c01963e_ba6e_488c_9482_d04448baa802.slice/cri-containerd-5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466.scope WatchSource:0}: task 5319001a9d74ddbe65f665518f1ded249795d2b3d93576133b3f5e8b17a55466 not found: not found
Dec 13 04:13:45.478593 systemd-networkd[979]: lxc_health: Gained IPv6LL
Dec 13 04:13:45.739938 systemd[1]: run-containerd-runc-k8s.io-c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb-runc.zb9dTu.mount: Deactivated successfully.
Dec 13 04:13:47.987571 systemd[1]: run-containerd-runc-k8s.io-c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb-runc.xVh07i.mount: Deactivated successfully.
Dec 13 04:13:48.278614 kubelet[1882]: W1213 04:13:48.278441 1882 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c01963e_ba6e_488c_9482_d04448baa802.slice/cri-containerd-29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589.scope WatchSource:0}: task 29f697f426ce4994cc93f0de809274c42e9ec9f4717883417a67ebf4626b4589 not found: not found
Dec 13 04:13:50.201624 systemd[1]: run-containerd-runc-k8s.io-c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb-runc.hw9bVZ.mount: Deactivated successfully.
Dec 13 04:13:51.388352 kubelet[1882]: W1213 04:13:51.388251 1882 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c01963e_ba6e_488c_9482_d04448baa802.slice/cri-containerd-67328251342cac237e4cb14c61bcbbf8ce2850a88b8b907880713d0583dd4ebf.scope WatchSource:0}: task 67328251342cac237e4cb14c61bcbbf8ce2850a88b8b907880713d0583dd4ebf not found: not found
Dec 13 04:13:52.414559 systemd[1]: run-containerd-runc-k8s.io-c1f258d2dd8e4e7cf5d39a11f7b5460e6cefa829a6b1a9eff9a0ba8228bf96eb-runc.VEnkVX.mount: Deactivated successfully.
Dec 13 04:13:52.831247 sshd[3724]: pam_unix(sshd:session): session closed for user core
Dec 13 04:13:52.838069 systemd[1]: sshd@22-172.24.4.188:22-172.24.4.1:52780.service: Deactivated successfully.
Dec 13 04:13:52.839772 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 04:13:52.841527 systemd-logind[1133]: Session 23 logged out. Waiting for processes to exit.
Dec 13 04:13:52.843607 systemd-logind[1133]: Removed session 23.