Dec 13 03:59:40.911425 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 03:59:40.911447 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 03:59:40.911460 kernel: BIOS-provided physical RAM map: Dec 13 03:59:40.911467 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 03:59:40.911474 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 03:59:40.911481 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 03:59:40.911489 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Dec 13 03:59:40.911496 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Dec 13 03:59:40.911504 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 03:59:40.911511 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 03:59:40.911518 kernel: NX (Execute Disable) protection: active Dec 13 03:59:40.911524 kernel: SMBIOS 2.8 present. Dec 13 03:59:40.911531 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Dec 13 03:59:40.911538 kernel: Hypervisor detected: KVM Dec 13 03:59:40.911546 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 03:59:40.911555 kernel: kvm-clock: cpu 0, msr 3219b001, primary cpu clock Dec 13 03:59:40.911562 kernel: kvm-clock: using sched offset of 5552419820 cycles Dec 13 03:59:40.911570 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 03:59:40.911578 kernel: tsc: Detected 1996.249 MHz processor Dec 13 03:59:40.911585 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 03:59:40.911593 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 03:59:40.911601 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 13 03:59:40.911608 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 03:59:40.911618 kernel: ACPI: Early table checksum verification disabled Dec 13 03:59:40.911625 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Dec 13 03:59:40.911633 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 03:59:40.911640 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 03:59:40.911648 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 03:59:40.911655 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 13 03:59:40.911662 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 03:59:40.911670 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 03:59:40.911677 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Dec 13 03:59:40.911686 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Dec 13 03:59:40.911694 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 13 03:59:40.911701 kernel: ACPI: Reserving APIC table memory at [mem 
0x7ffe17a0-0x7ffe181f] Dec 13 03:59:40.911708 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Dec 13 03:59:40.911715 kernel: No NUMA configuration found Dec 13 03:59:40.911722 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Dec 13 03:59:40.911729 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Dec 13 03:59:40.911737 kernel: Zone ranges: Dec 13 03:59:40.911750 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 03:59:40.911758 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Dec 13 03:59:40.911765 kernel: Normal empty Dec 13 03:59:40.911773 kernel: Movable zone start for each node Dec 13 03:59:40.911781 kernel: Early memory node ranges Dec 13 03:59:40.911789 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 03:59:40.911798 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Dec 13 03:59:40.915863 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Dec 13 03:59:40.915872 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 03:59:40.915880 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 03:59:40.915888 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Dec 13 03:59:40.915896 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 03:59:40.915904 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 03:59:40.915911 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 03:59:40.915919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 03:59:40.915927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 03:59:40.915939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 03:59:40.915947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 03:59:40.915955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 03:59:40.915962 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 03:59:40.915970 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 03:59:40.915977 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 13 03:59:40.915985 kernel: Booting paravirtualized kernel on KVM Dec 13 03:59:40.915993 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 03:59:40.916001 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 03:59:40.916010 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 03:59:40.916018 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 03:59:40.916025 kernel: pcpu-alloc: [0] 0 1 Dec 13 03:59:40.916033 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Dec 13 03:59:40.916041 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 03:59:40.916048 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515805 Dec 13 03:59:40.916055 kernel: Policy zone: DMA32 Dec 13 03:59:40.916065 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 03:59:40.916075 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 03:59:40.916082 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 03:59:40.916090 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 03:59:40.916098 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 03:59:40.916106 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123076K reserved, 0K cma-reserved) Dec 13 03:59:40.916113 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 03:59:40.916121 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 03:59:40.916128 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 03:59:40.916142 kernel: rcu: Hierarchical RCU implementation. Dec 13 03:59:40.916151 kernel: rcu: RCU event tracing is enabled. Dec 13 03:59:40.916158 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 03:59:40.916166 kernel: Rude variant of Tasks RCU enabled. Dec 13 03:59:40.916174 kernel: Tracing variant of Tasks RCU enabled. Dec 13 03:59:40.916182 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 03:59:40.916189 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 03:59:40.916197 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 03:59:40.916204 kernel: Console: colour VGA+ 80x25 Dec 13 03:59:40.916213 kernel: printk: console [tty0] enabled Dec 13 03:59:40.916221 kernel: printk: console [ttyS0] enabled Dec 13 03:59:40.916229 kernel: ACPI: Core revision 20210730 Dec 13 03:59:40.916236 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 03:59:40.916244 kernel: x2apic enabled Dec 13 03:59:40.916251 kernel: Switched APIC routing to physical x2apic. Dec 13 03:59:40.916259 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 03:59:40.916266 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 03:59:40.916274 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Dec 13 03:59:40.916282 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 03:59:40.916291 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 03:59:40.916299 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 03:59:40.916307 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 03:59:40.916314 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 03:59:40.916322 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 03:59:40.916330 kernel: Speculative Store Bypass: Vulnerable Dec 13 03:59:40.916337 kernel: x86/fpu: x87 FPU will use FXSAVE Dec 13 03:59:40.916345 kernel: Freeing SMP alternatives memory: 32K Dec 13 03:59:40.916352 kernel: pid_max: default: 32768 minimum: 301 Dec 13 03:59:40.916361 kernel: LSM: Security Framework initializing Dec 13 03:59:40.916369 kernel: SELinux: Initializing. Dec 13 03:59:40.916376 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 03:59:40.916384 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 03:59:40.916392 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Dec 13 03:59:40.916399 kernel: Performance Events: AMD PMU driver. Dec 13 03:59:40.916407 kernel: ... version: 0 Dec 13 03:59:40.916414 kernel: ... bit width: 48 Dec 13 03:59:40.916422 kernel: ... generic registers: 4 Dec 13 03:59:40.916437 kernel: ... value mask: 0000ffffffffffff Dec 13 03:59:40.916445 kernel: ... max period: 00007fffffffffff Dec 13 03:59:40.916453 kernel: ... fixed-purpose events: 0 Dec 13 03:59:40.916463 kernel: ... event mask: 000000000000000f Dec 13 03:59:40.916471 kernel: signal: max sigframe size: 1440 Dec 13 03:59:40.916478 kernel: rcu: Hierarchical SRCU implementation. Dec 13 03:59:40.916486 kernel: smp: Bringing up secondary CPUs ... Dec 13 03:59:40.916494 kernel: x86: Booting SMP configuration: Dec 13 03:59:40.916504 kernel: .... 
node #0, CPUs: #1 Dec 13 03:59:40.916512 kernel: kvm-clock: cpu 1, msr 3219b041, secondary cpu clock Dec 13 03:59:40.916520 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Dec 13 03:59:40.916528 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 03:59:40.916536 kernel: smpboot: Max logical packages: 2 Dec 13 03:59:40.916544 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Dec 13 03:59:40.916552 kernel: devtmpfs: initialized Dec 13 03:59:40.916560 kernel: x86/mm: Memory block size: 128MB Dec 13 03:59:40.916568 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 03:59:40.916578 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 03:59:40.916586 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 03:59:40.916593 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 03:59:40.916601 kernel: audit: initializing netlink subsys (disabled) Dec 13 03:59:40.916609 kernel: audit: type=2000 audit(1734062380.932:1): state=initialized audit_enabled=0 res=1 Dec 13 03:59:40.916617 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 03:59:40.916625 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 03:59:40.916633 kernel: cpuidle: using governor menu Dec 13 03:59:40.916641 kernel: ACPI: bus type PCI registered Dec 13 03:59:40.916651 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 03:59:40.916659 kernel: dca service started, version 1.12.1 Dec 13 03:59:40.916666 kernel: PCI: Using configuration type 1 for base access Dec 13 03:59:40.916675 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 03:59:40.916683 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 03:59:40.916690 kernel: ACPI: Added _OSI(Module Device) Dec 13 03:59:40.916698 kernel: ACPI: Added _OSI(Processor Device) Dec 13 03:59:40.916706 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 03:59:40.916714 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 03:59:40.916724 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 03:59:40.916732 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 03:59:40.916739 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 03:59:40.916989 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 03:59:40.917015 kernel: ACPI: Interpreter enabled Dec 13 03:59:40.917037 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 03:59:40.917059 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 03:59:40.917081 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 03:59:40.917102 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 03:59:40.917140 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 03:59:40.917526 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 03:59:40.917796 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 03:59:40.917864 kernel: acpiphp: Slot [3] registered Dec 13 03:59:40.917885 kernel: acpiphp: Slot [4] registered Dec 13 03:59:40.917905 kernel: acpiphp: Slot [5] registered Dec 13 03:59:40.917925 kernel: acpiphp: Slot [6] registered Dec 13 03:59:40.917953 kernel: acpiphp: Slot [7] registered Dec 13 03:59:40.917973 kernel: acpiphp: Slot [8] registered Dec 13 03:59:40.917992 kernel: acpiphp: Slot [9] registered Dec 13 03:59:40.918012 kernel: acpiphp: Slot [10] registered Dec 13 03:59:40.918033 kernel: acpiphp: Slot [11] registered Dec 13 03:59:40.918053 kernel: acpiphp: Slot [12] registered Dec 13 03:59:40.918073 kernel: acpiphp: Slot [13] registered Dec 13 03:59:40.918093 kernel: acpiphp: Slot [14] registered Dec 13 03:59:40.918112 kernel: acpiphp: Slot [15] registered Dec 13 03:59:40.918132 kernel: acpiphp: Slot [16] registered Dec 13 03:59:40.918156 kernel: acpiphp: Slot [17] registered Dec 13 03:59:40.918176 kernel: acpiphp: Slot [18] registered Dec 13 03:59:40.918195 kernel: acpiphp: Slot [19] registered Dec 13 03:59:40.918215 kernel: acpiphp: Slot [20] registered Dec 13 03:59:40.918235 kernel: acpiphp: Slot [21] registered Dec 13 03:59:40.918255 kernel: acpiphp: Slot [22] registered Dec 13 03:59:40.918275 kernel: acpiphp: Slot [23] registered Dec 13 03:59:40.918294 kernel: acpiphp: Slot [24] registered Dec 13 03:59:40.918314 kernel: acpiphp: Slot [25] registered Dec 13 03:59:40.918337 kernel: acpiphp: Slot [26] registered Dec 13 03:59:40.918357 kernel: acpiphp: Slot [27] registered Dec 13 03:59:40.918377 kernel: acpiphp: Slot [28] registered Dec 13 03:59:40.918396 kernel: acpiphp: Slot [29] registered Dec 13 03:59:40.918416 kernel: acpiphp: Slot [30] registered Dec 13 03:59:40.918436 kernel: acpiphp: Slot [31] registered Dec 13 03:59:40.918456 kernel: PCI host bridge to bus 0000:00 Dec 13 03:59:40.918679 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 03:59:40.918905 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 03:59:40.919099 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 03:59:40.919277 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 03:59:40.919457 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 13 03:59:40.919654 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 03:59:40.923005 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 03:59:40.923249 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 03:59:40.923485 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Dec 13 03:59:40.923691 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Dec 13 03:59:40.926993 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 03:59:40.927214 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 03:59:40.927418 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 03:59:40.927620 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 03:59:40.927873 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 03:59:40.928102 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 13 03:59:40.928307 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 13 03:59:40.928525 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Dec 13 03:59:40.928733 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Dec 13 
03:59:40.928982 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Dec 13 03:59:40.929187 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Dec 13 03:59:40.929405 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Dec 13 03:59:40.929613 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 03:59:40.932134 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 03:59:40.932381 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Dec 13 03:59:40.932588 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Dec 13 03:59:40.932793 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Dec 13 03:59:40.933167 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Dec 13 03:59:40.933423 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 03:59:40.933702 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 03:59:40.934006 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Dec 13 03:59:40.934213 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 13 03:59:40.934429 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Dec 13 03:59:40.934633 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Dec 13 03:59:40.936961 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 13 03:59:40.937262 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 03:59:40.937472 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Dec 13 03:59:40.937674 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Dec 13 03:59:40.937705 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 03:59:40.937727 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 03:59:40.937773 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 03:59:40.939065 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 03:59:40.939129 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 03:59:40.939178 kernel: iommu: Default domain type: Translated Dec 13 03:59:40.939200 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 03:59:40.939465 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 13 03:59:40.939673 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 03:59:40.939965 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 13 03:59:40.940001 kernel: vgaarb: loaded Dec 13 03:59:40.940022 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 03:59:40.940043 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Dec 13 03:59:40.940064 kernel: PTP clock support registered Dec 13 03:59:40.940092 kernel: PCI: Using ACPI for IRQ routing Dec 13 03:59:40.940112 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 03:59:40.940133 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 03:59:40.940153 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Dec 13 03:59:40.940173 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 03:59:40.940194 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 03:59:40.940214 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 03:59:40.940234 kernel: pnp: PnP ACPI init Dec 13 03:59:40.940455 kernel: pnp 00:03: [dma 2] Dec 13 03:59:40.940496 kernel: pnp: PnP ACPI: found 5 devices Dec 13 03:59:40.940516 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 03:59:40.940537 kernel: NET: Registered PF_INET protocol family Dec 13 03:59:40.940558 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 03:59:40.940578 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 03:59:40.940598 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 03:59:40.940619 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 03:59:40.940640 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 03:59:40.940663 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 03:59:40.940684 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 03:59:40.940704 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 03:59:40.940724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 03:59:40.940745 kernel: NET: Registered PF_XDP protocol family Dec 13 03:59:40.945053 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 03:59:40.945272 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 03:59:40.945453 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 03:59:40.945630 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 03:59:40.945908 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 13 03:59:40.946119 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 13 03:59:40.946325 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 03:59:40.946526 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Dec 13 03:59:40.946556 kernel: PCI: CLS 0 bytes, default 64 Dec 13 03:59:40.946577 kernel: Initialise system trusted keyrings Dec 13 03:59:40.946598 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 03:59:40.946618 kernel: Key type asymmetric registered Dec 13 03:59:40.946662 kernel: Asymmetric key parser 'x509' registered Dec 13 03:59:40.946682 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 03:59:40.946703 kernel: io scheduler mq-deadline registered Dec 13 03:59:40.946723 kernel: io scheduler kyber registered Dec 13 03:59:40.946744 kernel: io scheduler bfq registered Dec 13 03:59:40.946764 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 03:59:40.946785 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 13 03:59:40.946836 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 03:59:40.946858 kernel: ACPI: \_SB_.LNKD: Enabled at
IRQ 11 Dec 13 03:59:40.946883 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 03:59:40.946903 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 03:59:40.946924 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 03:59:40.946944 kernel: random: crng init done Dec 13 03:59:40.946965 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 03:59:40.946985 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 03:59:40.947005 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 03:59:40.947239 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 03:59:40.947281 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 03:59:40.947463 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 03:59:40.947612 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T03:59:40 UTC (1734062380) Dec 13 03:59:40.947748 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 13 03:59:40.947771 kernel: NET: Registered PF_INET6 protocol family Dec 13 03:59:40.947786 kernel: Segment Routing with IPv6 Dec 13 03:59:40.949831 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 03:59:40.949860 kernel: NET: Registered PF_PACKET protocol family Dec 13 03:59:40.949877 kernel: Key type dns_resolver registered Dec 13 03:59:40.949897 kernel: IPI shorthand broadcast: enabled Dec 13 03:59:40.949913 kernel: sched_clock: Marking stable (732142144, 140887963)->(897213564, -24183457) Dec 13 03:59:40.949928 kernel: registered taskstats version 1 Dec 13 03:59:40.949943 kernel: Loading compiled-in X.509 certificates Dec 13 03:59:40.949958 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 03:59:40.949973 kernel: Key type .fscrypt registered Dec 13 03:59:40.949988 kernel: Key type fscrypt-provisioning registered Dec 13 03:59:40.950004 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 03:59:40.950022 kernel: ima: Allocated hash algorithm: sha1 Dec 13 03:59:40.950037 kernel: ima: No architecture policies found Dec 13 03:59:40.950052 kernel: clk: Disabling unused clocks Dec 13 03:59:40.950067 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 03:59:40.950083 kernel: Write protecting the kernel read-only data: 28672k Dec 13 03:59:40.950098 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 03:59:40.950113 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 03:59:40.950128 kernel: Run /init as init process Dec 13 03:59:40.950144 kernel: with arguments: Dec 13 03:59:40.950158 kernel: /init Dec 13 03:59:40.950177 kernel: with environment: Dec 13 03:59:40.950191 kernel: HOME=/ Dec 13 03:59:40.950206 kernel: TERM=linux Dec 13 03:59:40.950221 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 03:59:40.950242 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 03:59:40.950263 systemd[1]: Detected virtualization kvm. Dec 13 03:59:40.950281 systemd[1]: Detected architecture x86-64. Dec 13 03:59:40.950301 systemd[1]: Running in initrd. Dec 13 03:59:40.950317 systemd[1]: No hostname configured, using default hostname. 
Dec 13 03:59:40.950333 systemd[1]: Hostname set to <localhost>. Dec 13 03:59:40.950350 systemd[1]: Initializing machine ID from VM UUID. Dec 13 03:59:40.950366 systemd[1]: Queued start job for default target initrd.target. Dec 13 03:59:40.950383 systemd[1]: Started systemd-ask-password-console.path. Dec 13 03:59:40.950399 systemd[1]: Reached target cryptsetup.target. Dec 13 03:59:40.950416 systemd[1]: Reached target paths.target. Dec 13 03:59:40.950434 systemd[1]: Reached target slices.target. Dec 13 03:59:40.950450 systemd[1]: Reached target swap.target. Dec 13 03:59:40.950466 systemd[1]: Reached target timers.target. Dec 13 03:59:40.950483 systemd[1]: Listening on iscsid.socket. Dec 13 03:59:40.950499 systemd[1]: Listening on iscsiuio.socket. Dec 13 03:59:40.950515 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 03:59:40.950532 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 03:59:40.950548 systemd[1]: Listening on systemd-journald.socket. Dec 13 03:59:40.950567 systemd[1]: Listening on systemd-networkd.socket. Dec 13 03:59:40.950583 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 03:59:40.950600 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 03:59:40.950617 systemd[1]: Reached target sockets.target. Dec 13 03:59:40.950646 systemd[1]: Starting kmod-static-nodes.service... Dec 13 03:59:40.950665 systemd[1]: Finished network-cleanup.service. Dec 13 03:59:40.950684 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 03:59:40.950701 systemd[1]: Starting systemd-journald.service... Dec 13 03:59:40.950718 systemd[1]: Starting systemd-modules-load.service... Dec 13 03:59:40.950735 systemd[1]: Starting systemd-resolved.service... Dec 13 03:59:40.950752 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 03:59:40.950768 systemd[1]: Finished kmod-static-nodes.service. Dec 13 03:59:40.950785 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 03:59:40.950857 systemd-journald[186]: Journal started Dec 13 03:59:40.950951 systemd-journald[186]: Runtime Journal (/run/log/journal/6084184725254990b2b426934b92433e) is 4.9M, max 39.5M, 34.5M free. Dec 13 03:59:40.920844 systemd-modules-load[187]: Inserted module 'overlay' Dec 13 03:59:40.978791 systemd[1]: Started systemd-journald.service. Dec 13 03:59:40.978840 kernel: audit: type=1130 audit(1734062380.970:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:40.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:40.959312 systemd-resolved[188]: Positive Trust Anchors: Dec 13 03:59:40.959323 systemd-resolved[188]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 03:59:40.998317 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 03:59:40.998347 kernel: audit: type=1130 audit(1734062380.972:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:40.998362 kernel: audit: type=1130 audit(1734062380.973:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=?
addr=? terminal=? res=success' Dec 13 03:59:40.998376 kernel: Bridge firewalling registered Dec 13 03:59:40.998388 kernel: audit: type=1130 audit(1734062380.991:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:40.998401 kernel: audit: type=1130 audit(1734062380.996:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:40.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:40.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:40.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:40.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:40.959360 systemd-resolved[188]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 03:59:40.962145 systemd-resolved[188]: Defaulting to hostname 'linux'. Dec 13 03:59:40.973402 systemd[1]: Started systemd-resolved.service. Dec 13 03:59:40.973990 systemd[1]: Reached target nss-lookup.target. Dec 13 03:59:40.982941 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 03:59:40.991760 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 03:59:40.992400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 03:59:40.996707 systemd-modules-load[187]: Inserted module 'br_netfilter' Dec 13 03:59:40.998040 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 03:59:41.023827 kernel: SCSI subsystem initialized Dec 13 03:59:41.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.026086 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 03:59:41.030831 kernel: audit: type=1130 audit(1734062381.025:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.032994 systemd[1]: Starting dracut-cmdline.service... Dec 13 03:59:41.045888 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 03:59:41.045930 kernel: device-mapper: uevent: version 1.0.3 Dec 13 03:59:41.047822 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 03:59:41.050267 dracut-cmdline[203]: dracut-dracut-053 Dec 13 03:59:41.052120 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 03:59:41.054763 systemd-modules-load[187]: Inserted module 'dm_multipath' Dec 13 03:59:41.055618 systemd[1]: Finished systemd-modules-load.service. Dec 13 03:59:41.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.057200 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:59:41.061369 kernel: audit: type=1130 audit(1734062381.055:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.068380 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:59:41.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.072841 kernel: audit: type=1130 audit(1734062381.068:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.126862 kernel: Loading iSCSI transport class v2.0-870. Dec 13 03:59:41.147843 kernel: iscsi: registered transport (tcp) Dec 13 03:59:41.174224 kernel: iscsi: registered transport (qla4xxx) Dec 13 03:59:41.174272 kernel: QLogic iSCSI HBA Driver Dec 13 03:59:41.227582 systemd[1]: Finished dracut-cmdline.service. Dec 13 03:59:41.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.230800 systemd[1]: Starting dracut-pre-udev.service... Dec 13 03:59:41.233896 kernel: audit: type=1130 audit(1734062381.227:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.330982 kernel: raid6: sse2x4 gen() 5266 MB/s Dec 13 03:59:41.347879 kernel: raid6: sse2x4 xor() 4582 MB/s Dec 13 03:59:41.364862 kernel: raid6: sse2x2 gen() 13946 MB/s Dec 13 03:59:41.381939 kernel: raid6: sse2x2 xor() 8598 MB/s Dec 13 03:59:41.398907 kernel: raid6: sse2x1 gen() 10472 MB/s Dec 13 03:59:41.416716 kernel: raid6: sse2x1 xor() 6766 MB/s Dec 13 03:59:41.416860 kernel: raid6: using algorithm sse2x2 gen() 13946 MB/s Dec 13 03:59:41.416901 kernel: raid6: .... 
xor() 8598 MB/s, rmw enabled Dec 13 03:59:41.417597 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 03:59:41.433439 kernel: xor: measuring software checksum speed Dec 13 03:59:41.433502 kernel: prefetch64-sse : 18358 MB/sec Dec 13 03:59:41.434446 kernel: generic_sse : 16726 MB/sec Dec 13 03:59:41.434486 kernel: xor: using function: prefetch64-sse (18358 MB/sec) Dec 13 03:59:41.551877 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 03:59:41.569450 systemd[1]: Finished dracut-pre-udev.service. Dec 13 03:59:41.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.569000 audit: BPF prog-id=7 op=LOAD Dec 13 03:59:41.570000 audit: BPF prog-id=8 op=LOAD Dec 13 03:59:41.571592 systemd[1]: Starting systemd-udevd.service... Dec 13 03:59:41.586160 systemd-udevd[386]: Using default interface naming scheme 'v252'. Dec 13 03:59:41.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.597975 systemd[1]: Started systemd-udevd.service. Dec 13 03:59:41.604338 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 03:59:41.620635 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Dec 13 03:59:41.673930 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 03:59:41.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.677093 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:59:41.736035 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:59:41.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:41.811892 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 03:59:41.836180 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 03:59:41.836204 kernel: GPT:17805311 != 41943039 Dec 13 03:59:41.836216 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 03:59:41.836227 kernel: GPT:17805311 != 41943039 Dec 13 03:59:41.836238 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 03:59:41.836250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:59:41.863829 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (442) Dec 13 03:59:41.870838 kernel: libata version 3.00 loaded. Dec 13 03:59:41.875087 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 03:59:41.920679 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 03:59:41.921093 kernel: scsi host0: ata_piix Dec 13 03:59:41.921353 kernel: scsi host1: ata_piix Dec 13 03:59:41.921605 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 03:59:41.921636 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 03:59:41.925305 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 03:59:41.928954 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Dec 13 03:59:41.929543 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 03:59:41.936787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:59:41.938567 systemd[1]: Starting disk-uuid.service... Dec 13 03:59:41.953927 disk-uuid[459]: Primary Header is updated. Dec 13 03:59:41.953927 disk-uuid[459]: Secondary Entries is updated. Dec 13 03:59:41.953927 disk-uuid[459]: Secondary Header is updated. Dec 13 03:59:41.962844 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:59:41.972845 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:59:43.046888 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 03:59:43.047374 disk-uuid[460]: The operation has completed successfully. Dec 13 03:59:43.485407 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 03:59:43.486308 systemd[1]: Finished disk-uuid.service. Dec 13 03:59:43.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:43.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:43.490151 systemd[1]: Starting verity-setup.service... Dec 13 03:59:43.583321 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 03:59:44.054343 systemd[1]: Found device dev-mapper-usr.device. Dec 13 03:59:44.057253 systemd[1]: Finished verity-setup.service. Dec 13 03:59:44.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.059674 systemd[1]: Mounting sysusr-usr.mount... Dec 13 03:59:44.205883 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 03:59:44.206553 systemd[1]: Mounted sysusr-usr.mount. Dec 13 03:59:44.207269 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 03:59:44.207958 systemd[1]: Starting ignition-setup.service... Dec 13 03:59:44.211353 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 03:59:44.227557 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:59:44.227606 kernel: BTRFS info (device vda6): using free space tree Dec 13 03:59:44.227618 kernel: BTRFS info (device vda6): has skinny extents Dec 13 03:59:44.251697 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 03:59:44.265468 systemd[1]: Finished ignition-setup.service. Dec 13 03:59:44.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.269122 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 03:59:44.354794 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 03:59:44.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.355000 audit: BPF prog-id=9 op=LOAD Dec 13 03:59:44.357524 systemd[1]: Starting systemd-networkd.service... 
Dec 13 03:59:44.384024 systemd-networkd[633]: lo: Link UP Dec 13 03:59:44.384039 systemd-networkd[633]: lo: Gained carrier Dec 13 03:59:44.384848 systemd-networkd[633]: Enumeration completed Dec 13 03:59:44.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.384945 systemd[1]: Started systemd-networkd.service. Dec 13 03:59:44.385435 systemd-networkd[633]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:59:44.386421 systemd[1]: Reached target network.target. Dec 13 03:59:44.389130 systemd[1]: Starting iscsiuio.service... Dec 13 03:59:44.389187 systemd-networkd[633]: eth0: Link UP Dec 13 03:59:44.389191 systemd-networkd[633]: eth0: Gained carrier Dec 13 03:59:44.401346 systemd[1]: Started iscsiuio.service. Dec 13 03:59:44.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.403908 systemd[1]: Starting iscsid.service... Dec 13 03:59:44.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.414695 iscsid[642]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:59:44.414695 iscsid[642]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 03:59:44.414695 iscsid[642]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 03:59:44.414695 iscsid[642]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 03:59:44.414695 iscsid[642]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 03:59:44.414695 iscsid[642]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:59:44.414695 iscsid[642]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 03:59:44.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.406925 systemd-networkd[633]: eth0: DHCPv4 address 172.24.4.115/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 03:59:44.413950 systemd[1]: Started iscsid.service. Dec 13 03:59:44.415466 systemd[1]: Starting dracut-initqueue.service... Dec 13 03:59:44.428476 systemd[1]: Finished dracut-initqueue.service. Dec 13 03:59:44.431131 systemd[1]: Reached target remote-fs-pre.target. Dec 13 03:59:44.432087 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:59:44.433603 systemd[1]: Reached target remote-fs.target. Dec 13 03:59:44.436138 systemd[1]: Starting dracut-pre-mount.service... Dec 13 03:59:44.444673 systemd[1]: Finished dracut-pre-mount.service. Dec 13 03:59:44.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 03:59:44.558544 ignition[557]: Ignition 2.14.0 Dec 13 03:59:44.559965 ignition[557]: Stage: fetch-offline Dec 13 03:59:44.560988 ignition[557]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:59:44.561027 ignition[557]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:59:44.562579 ignition[557]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:59:44.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.564266 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 03:59:44.562783 ignition[557]: parsed url from cmdline: "" Dec 13 03:59:44.562791 ignition[557]: no config URL provided Dec 13 03:59:44.562820 ignition[557]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 03:59:44.567097 systemd[1]: Starting ignition-fetch.service... Dec 13 03:59:44.562839 ignition[557]: no config at "/usr/lib/ignition/user.ign" Dec 13 03:59:44.562854 ignition[557]: failed to fetch config: resource requires networking Dec 13 03:59:44.563007 ignition[557]: Ignition finished successfully Dec 13 03:59:44.584259 ignition[656]: Ignition 2.14.0 Dec 13 03:59:44.584282 ignition[656]: Stage: fetch Dec 13 03:59:44.584473 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:59:44.584509 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:59:44.586230 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:59:44.586405 ignition[656]: parsed url from cmdline: "" Dec 13 03:59:44.586412 ignition[656]: no config URL provided Dec 13 03:59:44.586423 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 03:59:44.586437 ignition[656]: no config at "/usr/lib/ignition/user.ign" Dec 13 03:59:44.591693 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 03:59:44.591747 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 03:59:44.591758 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 03:59:44.939516 ignition[656]: GET result: OK Dec 13 03:59:44.939696 ignition[656]: parsing config with SHA512: cde9487b15962c917bcacd14e2e3ee94866361a5e5c9279d459829aab5311b741346a034073fc5cb5113550afb811c6aa06339147225dc87509a3e3e6c19b376 Dec 13 03:59:44.961120 unknown[656]: fetched base config from "system" Dec 13 03:59:44.961156 unknown[656]: fetched base config from "system" Dec 13 03:59:44.963023 ignition[656]: fetch: fetch complete Dec 13 03:59:44.961190 unknown[656]: fetched user config from "openstack" Dec 13 03:59:44.963040 ignition[656]: fetch: fetch passed Dec 13 03:59:44.967433 systemd[1]: Finished ignition-fetch.service. Dec 13 03:59:44.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.963149 ignition[656]: Ignition finished successfully Dec 13 03:59:44.971473 systemd[1]: Starting ignition-kargs.service... 
Dec 13 03:59:44.991766 ignition[662]: Ignition 2.14.0 Dec 13 03:59:44.991794 ignition[662]: Stage: kargs Dec 13 03:59:44.992076 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:59:44.992119 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:59:44.994538 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:59:45.020274 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 13 03:59:45.020326 kernel: audit: type=1130 audit(1734062385.007:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:44.997446 ignition[662]: kargs: kargs passed Dec 13 03:59:45.007216 systemd[1]: Finished ignition-kargs.service. Dec 13 03:59:44.997539 ignition[662]: Ignition finished successfully Dec 13 03:59:45.010143 systemd[1]: Starting ignition-disks.service... Dec 13 03:59:45.037518 ignition[667]: Ignition 2.14.0 Dec 13 03:59:45.039129 ignition[667]: Stage: disks Dec 13 03:59:45.040483 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:59:45.042236 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:59:45.044515 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:59:45.048717 ignition[667]: disks: disks passed Dec 13 03:59:45.050060 ignition[667]: Ignition finished successfully Dec 13 03:59:45.052989 systemd[1]: Finished ignition-disks.service. Dec 13 03:59:45.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.054777 systemd[1]: Reached target initrd-root-device.target. Dec 13 03:59:45.059355 kernel: audit: type=1130 audit(1734062385.053:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.059725 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:59:45.060223 systemd[1]: Reached target local-fs.target. Dec 13 03:59:45.061871 systemd[1]: Reached target sysinit.target. Dec 13 03:59:45.063447 systemd[1]: Reached target basic.target. Dec 13 03:59:45.065868 systemd[1]: Starting systemd-fsck-root.service... Dec 13 03:59:45.091199 systemd-fsck[675]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 03:59:45.102237 systemd[1]: Finished systemd-fsck-root.service. Dec 13 03:59:45.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.105264 systemd[1]: Mounting sysroot.mount... Dec 13 03:59:45.114738 kernel: audit: type=1130 audit(1734062385.102:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:59:45.133929 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 03:59:45.132840 systemd[1]: Mounted sysroot.mount. Dec 13 03:59:45.135158 systemd[1]: Reached target initrd-root-fs.target. Dec 13 03:59:45.138983 systemd[1]: Mounting sysroot-usr.mount... Dec 13 03:59:45.140890 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 03:59:45.142353 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 03:59:45.147795 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 03:59:45.148521 systemd[1]: Reached target ignition-diskful.target. Dec 13 03:59:45.156134 systemd[1]: Mounted sysroot-usr.mount. Dec 13 03:59:45.169214 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:59:45.174319 systemd[1]: Starting initrd-setup-root.service... Dec 13 03:59:45.193905 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 03:59:45.196522 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Dec 13 03:59:45.202714 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:59:45.202745 kernel: BTRFS info (device vda6): using free space tree Dec 13 03:59:45.202757 kernel: BTRFS info (device vda6): has skinny extents Dec 13 03:59:45.211747 initrd-setup-root[711]: cut: /sysroot/etc/group: No such file or directory Dec 13 03:59:45.220263 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 03:59:45.224224 initrd-setup-root[721]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 03:59:45.235328 initrd-setup-root[730]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 03:59:45.325215 systemd[1]: Finished initrd-setup-root.service. Dec 13 03:59:45.336527 kernel: audit: type=1130 audit(1734062385.326:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.328793 systemd[1]: Starting ignition-mount.service... Dec 13 03:59:45.339352 systemd[1]: Starting sysroot-boot.service... Dec 13 03:59:45.350698 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 03:59:45.350842 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 03:59:45.364738 ignition[749]: INFO : Ignition 2.14.0 Dec 13 03:59:45.365555 ignition[749]: INFO : Stage: mount Dec 13 03:59:45.366305 ignition[749]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:59:45.367019 ignition[749]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:59:45.368999 ignition[749]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:59:45.371703 ignition[749]: INFO : mount: mount passed Dec 13 03:59:45.372492 ignition[749]: INFO : Ignition finished successfully Dec 13 03:59:45.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:59:45.374059 systemd[1]: Finished ignition-mount.service. Dec 13 03:59:45.378824 kernel: audit: type=1130 audit(1734062385.373:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.392272 systemd[1]: Finished sysroot-boot.service. Dec 13 03:59:45.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.397856 kernel: audit: type=1130 audit(1734062385.391:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.398276 coreos-metadata[681]: Dec 13 03:59:45.398 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 03:59:45.419375 coreos-metadata[681]: Dec 13 03:59:45.419 INFO Fetch successful Dec 13 03:59:45.420003 coreos-metadata[681]: Dec 13 03:59:45.419 INFO wrote hostname ci-3510-3-6-f-1413c5ec2e.novalocal to /sysroot/etc/hostname Dec 13 03:59:45.425727 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 03:59:45.426033 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 03:59:45.436542 kernel: audit: type=1130 audit(1734062385.426:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.436563 kernel: audit: type=1131 audit(1734062385.426:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:45.429700 systemd[1]: Starting ignition-files.service... Dec 13 03:59:45.442882 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:59:45.458859 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758) Dec 13 03:59:45.462867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:59:45.462977 kernel: BTRFS info (device vda6): using free space tree Dec 13 03:59:45.463019 kernel: BTRFS info (device vda6): has skinny extents Dec 13 03:59:45.471850 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 03:59:45.484585 ignition[777]: INFO : Ignition 2.14.0 Dec 13 03:59:45.485380 ignition[777]: INFO : Stage: files Dec 13 03:59:45.486035 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:59:45.486785 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:59:45.488831 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:59:45.495144 ignition[777]: DEBUG : files: compiled without relabeling support, skipping Dec 13 03:59:45.496742 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 03:59:45.497537 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 03:59:45.503188 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 03:59:45.504167 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 03:59:45.506284 unknown[777]: wrote ssh authorized keys file for user: core Dec 13 03:59:45.507027 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 03:59:45.508160 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 03:59:45.509193 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 03:59:45.567394 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 03:59:45.863212 systemd-networkd[633]: eth0: Gained IPv6LL Dec 13 03:59:45.867790 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 03:59:45.870540 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 03:59:45.870540 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 03:59:46.396638 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 03:59:46.844383 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 03:59:46.844383 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 03:59:46.847228 ignition[777]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 03:59:46.847228 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 03:59:47.174846 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 03:59:49.032711 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 03:59:49.032711 ignition[777]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 03:59:49.032711 ignition[777]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 03:59:49.032711 ignition[777]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Dec 13 03:59:49.041999 ignition[777]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 03:59:49.041999 ignition[777]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 03:59:49.041999 ignition[777]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Dec 13 03:59:49.041999 ignition[777]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 03:59:49.041999 ignition[777]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 03:59:49.041999 ignition[777]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 03:59:49.041999 ignition[777]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 03:59:49.041999 ignition[777]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 03:59:49.041999 ignition[777]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 03:59:49.041999 ignition[777]: INFO : files: files passed Dec 13 03:59:49.041999 ignition[777]: INFO : Ignition finished successfully Dec 13 03:59:49.062585 kernel: audit: type=1130 audit(1734062389.042:38): pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.040777 systemd[1]: Finished ignition-files.service. Dec 13 03:59:49.044826 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 03:59:49.054364 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 03:59:49.069913 kernel: audit: type=1130 audit(1734062389.064:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.057035 systemd[1]: Starting ignition-quench.service... Dec 13 03:59:49.070622 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 03:59:49.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.064233 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 03:59:49.064428 systemd[1]: Finished ignition-quench.service. Dec 13 03:59:49.070234 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 03:59:49.072111 systemd[1]: Reached target ignition-complete.target. Dec 13 03:59:49.075206 systemd[1]: Starting initrd-parse-etc.service... Dec 13 03:59:49.095936 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 03:59:49.096979 systemd[1]: Finished initrd-parse-etc.service. Dec 13 03:59:49.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.097955 systemd[1]: Reached target initrd-fs.target. Dec 13 03:59:49.099426 systemd[1]: Reached target initrd.target. Dec 13 03:59:49.100948 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 03:59:49.101634 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 03:59:49.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.123255 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 03:59:49.124430 systemd[1]: Starting initrd-cleanup.service... Dec 13 03:59:49.138795 systemd[1]: Stopped target nss-lookup.target. 
Dec 13 03:59:49.139941 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 03:59:49.140985 systemd[1]: Stopped target timers.target. Dec 13 03:59:49.141985 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 03:59:49.142682 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 03:59:49.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.143882 systemd[1]: Stopped target initrd.target. Dec 13 03:59:49.144863 systemd[1]: Stopped target basic.target. Dec 13 03:59:49.145839 systemd[1]: Stopped target ignition-complete.target. Dec 13 03:59:49.146862 systemd[1]: Stopped target ignition-diskful.target. Dec 13 03:59:49.147897 systemd[1]: Stopped target initrd-root-device.target. Dec 13 03:59:49.148938 systemd[1]: Stopped target remote-fs.target. Dec 13 03:59:49.149937 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 03:59:49.151028 systemd[1]: Stopped target sysinit.target. Dec 13 03:59:49.152013 systemd[1]: Stopped target local-fs.target. Dec 13 03:59:49.152998 systemd[1]: Stopped target local-fs-pre.target. Dec 13 03:59:49.154001 systemd[1]: Stopped target swap.target. Dec 13 03:59:49.154930 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 03:59:49.155569 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 03:59:49.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.156696 systemd[1]: Stopped target cryptsetup.target. Dec 13 03:59:49.157652 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 03:59:49.158350 systemd[1]: Stopped dracut-initqueue.service. Dec 13 03:59:49.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.159502 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 03:59:49.160300 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 03:59:49.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.161441 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 03:59:49.161556 systemd[1]: Stopped ignition-files.service. Dec 13 03:59:49.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.163987 systemd[1]: Stopping ignition-mount.service... Dec 13 03:59:49.167166 iscsid[642]: iscsid shutting down. Dec 13 03:59:49.169855 systemd[1]: Stopping iscsid.service... Dec 13 03:59:49.171141 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 03:59:49.171881 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 03:59:49.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.173973 systemd[1]: Stopping sysroot-boot.service... 
Dec 13 03:59:49.175078 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 03:59:49.175976 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 03:59:49.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.177243 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 03:59:49.177991 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 03:59:49.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.181008 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 03:59:49.181614 systemd[1]: Stopped iscsid.service. Dec 13 03:59:49.182501 ignition[815]: INFO : Ignition 2.14.0 Dec 13 03:59:49.182501 ignition[815]: INFO : Stage: umount Dec 13 03:59:49.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.184071 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:59:49.184071 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 03:59:49.188847 systemd[1]: Stopping iscsiuio.service... Dec 13 03:59:49.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.192164 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 03:59:49.192254 systemd[1]: Finished initrd-cleanup.service. Dec 13 03:59:49.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.197291 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 03:59:49.196289 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 03:59:49.196374 systemd[1]: Stopped iscsiuio.service. Dec 13 03:59:49.200374 ignition[815]: INFO : umount: umount passed Dec 13 03:59:49.200374 ignition[815]: INFO : Ignition finished successfully Dec 13 03:59:49.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:59:49.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.202547 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 03:59:49.202627 systemd[1]: Stopped ignition-mount.service. Dec 13 03:59:49.203165 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 03:59:49.203206 systemd[1]: Stopped ignition-disks.service. Dec 13 03:59:49.203668 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 03:59:49.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.203705 systemd[1]: Stopped ignition-kargs.service. Dec 13 03:59:49.204214 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 03:59:49.204251 systemd[1]: Stopped ignition-fetch.service. Dec 13 03:59:49.204718 systemd[1]: Stopped target network.target. Dec 13 03:59:49.205169 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 03:59:49.205209 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 03:59:49.205699 systemd[1]: Stopped target paths.target. Dec 13 03:59:49.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.221000 audit: BPF prog-id=6 op=UNLOAD Dec 13 03:59:49.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.206154 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 03:59:49.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.209011 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 03:59:49.209528 systemd[1]: Stopped target slices.target. Dec 13 03:59:49.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.209956 systemd[1]: Stopped target sockets.target. Dec 13 03:59:49.210387 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 03:59:49.210420 systemd[1]: Closed iscsid.socket. Dec 13 03:59:49.210854 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 03:59:49.210886 systemd[1]: Closed iscsiuio.socket. Dec 13 03:59:49.211288 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 03:59:49.211325 systemd[1]: Stopped ignition-setup.service. 
Dec 13 03:59:49.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.211854 systemd[1]: Stopping systemd-networkd.service... Dec 13 03:59:49.212531 systemd[1]: Stopping systemd-resolved.service... Dec 13 03:59:49.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.217225 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 03:59:49.217323 systemd[1]: Stopped systemd-resolved.service. Dec 13 03:59:49.217889 systemd-networkd[633]: eth0: DHCPv6 lease lost Dec 13 03:59:49.240000 audit: BPF prog-id=9 op=UNLOAD Dec 13 03:59:49.220158 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 03:59:49.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.220244 systemd[1]: Stopped systemd-networkd.service. Dec 13 03:59:49.221371 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 03:59:49.221401 systemd[1]: Closed systemd-networkd.socket. Dec 13 03:59:49.222583 systemd[1]: Stopping network-cleanup.service... Dec 13 03:59:49.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.223076 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 03:59:49.223133 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 03:59:49.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.223615 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 03:59:49.223651 systemd[1]: Stopped systemd-sysctl.service. Dec 13 03:59:49.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.225592 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 03:59:49.225630 systemd[1]: Stopped systemd-modules-load.service. Dec 13 03:59:49.226755 systemd[1]: Stopping systemd-udevd.service... 
Dec 13 03:59:49.231839 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 03:59:49.235451 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 03:59:49.235537 systemd[1]: Stopped network-cleanup.service. Dec 13 03:59:49.238189 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 03:59:49.238601 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 03:59:49.238724 systemd[1]: Stopped systemd-udevd.service. Dec 13 03:59:49.240996 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 03:59:49.241033 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 03:59:49.242375 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 03:59:49.242407 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 03:59:49.242996 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 03:59:49.243033 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 03:59:49.243501 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 03:59:49.243538 systemd[1]: Stopped dracut-cmdline.service. Dec 13 03:59:49.244074 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 03:59:49.244112 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 03:59:49.246638 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 03:59:49.249098 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 03:59:49.249142 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 03:59:49.250100 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 03:59:49.250185 systemd[1]: Stopped sysroot-boot.service. Dec 13 03:59:49.251753 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 03:59:49.251792 systemd[1]: Stopped initrd-setup-root.service. Dec 13 03:59:49.253454 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 03:59:49.253533 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 03:59:49.254088 systemd[1]: Reached target initrd-switch-root.target. Dec 13 03:59:49.255689 systemd[1]: Starting initrd-switch-root.service... Dec 13 03:59:49.275777 systemd[1]: Switching root. Dec 13 03:59:49.293543 systemd-journald[186]: Journal stopped Dec 13 03:59:54.226540 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Dec 13 03:59:54.226593 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 03:59:54.226615 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 03:59:54.226629 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 03:59:54.226645 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 03:59:54.226658 kernel: SELinux: policy capability open_perms=1 Dec 13 03:59:54.226671 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 03:59:54.226686 kernel: SELinux: policy capability always_check_network=0 Dec 13 03:59:54.226699 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 03:59:54.226711 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 03:59:54.226723 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 03:59:54.226739 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 03:59:54.226755 systemd[1]: Successfully loaded SELinux policy in 86.062ms. Dec 13 03:59:54.226774 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.506ms. 
Dec 13 03:59:54.226790 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 03:59:54.226822 systemd[1]: Detected virtualization kvm. Dec 13 03:59:54.226838 systemd[1]: Detected architecture x86-64. Dec 13 03:59:54.226851 systemd[1]: Detected first boot. Dec 13 03:59:54.226865 systemd[1]: Hostname set to <ci-3510-3-6-f-1413c5ec2e.novalocal>. Dec 13 03:59:54.226879 systemd[1]: Initializing machine ID from VM UUID. Dec 13 03:59:54.226896 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 03:59:54.226909 systemd[1]: Populated /etc with preset unit settings. Dec 13 03:59:54.226924 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:59:54.226940 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:59:54.226959 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:59:54.226973 kernel: kauditd_printk_skb: 56 callbacks suppressed Dec 13 03:59:54.226988 kernel: audit: type=1334 audit(1734062394.010:89): prog-id=12 op=LOAD Dec 13 03:59:54.227005 kernel: audit: type=1334 audit(1734062394.010:90): prog-id=3 op=UNLOAD Dec 13 03:59:54.227018 kernel: audit: type=1334 audit(1734062394.013:91): prog-id=13 op=LOAD Dec 13 03:59:54.227031 kernel: audit: type=1334 audit(1734062394.014:92): prog-id=14 op=LOAD Dec 13 03:59:54.227043 kernel: audit: type=1334 audit(1734062394.015:93): prog-id=4 op=UNLOAD Dec 13 03:59:54.227056 kernel: audit: type=1334 audit(1734062394.015:94): prog-id=5 op=UNLOAD Dec 13 03:59:54.227068 kernel: audit: type=1334 audit(1734062394.019:95): prog-id=15 op=LOAD Dec 13 03:59:54.227080 kernel: audit: type=1334 audit(1734062394.019:96): prog-id=12 op=UNLOAD Dec 13 03:59:54.227092 kernel: audit: type=1334 audit(1734062394.020:97): prog-id=16 op=LOAD Dec 13 03:59:54.227107 kernel: audit: type=1334 audit(1734062394.022:98): prog-id=17 op=LOAD Dec 13 03:59:54.227120 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 03:59:54.227135 systemd[1]: Stopped initrd-switch-root.service. Dec 13 03:59:54.227149 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 03:59:54.227163 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 03:59:54.227177 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 03:59:54.227193 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 03:59:54.227207 systemd[1]: Created slice system-getty.slice. Dec 13 03:59:54.227222 systemd[1]: Created slice system-modprobe.slice. Dec 13 03:59:54.227237 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 03:59:54.227251 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 03:59:54.227266 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 03:59:54.227280 systemd[1]: Created slice user.slice. Dec 13 03:59:54.227294 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 03:59:54.227307 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 03:59:54.227323 systemd[1]: Set up automount boot.automount. Dec 13 03:59:54.227337 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 03:59:54.227351 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 03:59:54.227365 systemd[1]: Stopped target initrd-fs.target. Dec 13 03:59:54.227379 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 03:59:54.227392 systemd[1]: Reached target integritysetup.target. Dec 13 03:59:54.227406 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:59:54.230883 systemd[1]: Reached target remote-fs.target. Dec 13 03:59:54.230908 systemd[1]: Reached target slices.target. Dec 13 03:59:54.230926 systemd[1]: Reached target swap.target. Dec 13 03:59:54.230939 systemd[1]: Reached target torcx.target. Dec 13 03:59:54.230954 systemd[1]: Reached target veritysetup.target. Dec 13 03:59:54.230968 systemd[1]: Listening on systemd-coredump.socket. Dec 13 03:59:54.230982 systemd[1]: Listening on systemd-initctl.socket. Dec 13 03:59:54.230996 systemd[1]: Listening on systemd-networkd.socket. Dec 13 03:59:54.231010 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 03:59:54.231023 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 03:59:54.231037 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 03:59:54.231053 systemd[1]: Mounting dev-hugepages.mount... Dec 13 03:59:54.231067 systemd[1]: Mounting dev-mqueue.mount... Dec 13 03:59:54.231080 systemd[1]: Mounting media.mount... Dec 13 03:59:54.231095 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:54.231109 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 03:59:54.231123 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 03:59:54.231136 systemd[1]: Mounting tmp.mount... Dec 13 03:59:54.231150 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 03:59:54.231164 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:59:54.231180 systemd[1]: Starting kmod-static-nodes.service... Dec 13 03:59:54.231194 systemd[1]: Starting modprobe@configfs.service... Dec 13 03:59:54.231207 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:59:54.231221 systemd[1]: Starting modprobe@drm.service... Dec 13 03:59:54.231237 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:59:54.231251 systemd[1]: Starting modprobe@fuse.service... Dec 13 03:59:54.231265 systemd[1]: Starting modprobe@loop.service... Dec 13 03:59:54.231279 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 03:59:54.231293 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 03:59:54.231309 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 03:59:54.231322 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 03:59:54.231336 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 03:59:54.231350 systemd[1]: Stopped systemd-journald.service. Dec 13 03:59:54.231363 systemd[1]: Starting systemd-journald.service... Dec 13 03:59:54.231377 systemd[1]: Starting systemd-modules-load.service... Dec 13 03:59:54.231391 systemd[1]: Starting systemd-network-generator.service... Dec 13 03:59:54.231404 systemd[1]: Starting systemd-remount-fs.service... Dec 13 03:59:54.231418 systemd[1]: Starting systemd-udev-trigger.service... 
Dec 13 03:59:54.231430 kernel: loop: module loaded Dec 13 03:59:54.231446 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 03:59:54.231460 systemd[1]: Stopped verity-setup.service. Dec 13 03:59:54.231474 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:54.231488 systemd[1]: Mounted dev-hugepages.mount. Dec 13 03:59:54.231501 systemd[1]: Mounted dev-mqueue.mount. Dec 13 03:59:54.231515 systemd[1]: Mounted media.mount. Dec 13 03:59:54.231528 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 03:59:54.231542 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 03:59:54.231558 systemd[1]: Mounted tmp.mount. Dec 13 03:59:54.231571 systemd[1]: Finished kmod-static-nodes.service. Dec 13 03:59:54.231585 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 03:59:54.231598 systemd[1]: Finished modprobe@configfs.service. Dec 13 03:59:54.231612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:59:54.231625 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:59:54.231640 kernel: fuse: init (API version 7.34) Dec 13 03:59:54.231654 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 03:59:54.231678 systemd-journald[926]: Journal started Dec 13 03:59:54.231732 systemd-journald[926]: Runtime Journal (/run/log/journal/6084184725254990b2b426934b92433e) is 4.9M, max 39.5M, 34.5M free. Dec 13 03:59:49.565000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 03:59:49.705000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:59:49.705000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:59:49.705000 audit: BPF prog-id=10 op=LOAD Dec 13 03:59:49.705000 audit: BPF prog-id=10 op=UNLOAD Dec 13 03:59:49.705000 audit: BPF prog-id=11 op=LOAD Dec 13 03:59:49.705000 audit: BPF prog-id=11 op=UNLOAD Dec 13 03:59:49.927000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 03:59:49.927000 audit[848]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:59:49.927000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:59:49.932000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 03:59:49.932000 audit[848]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=831 pid=848 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:59:49.932000 audit: CWD cwd="/" Dec 13 03:59:49.932000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:49.932000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:49.932000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:59:54.010000 audit: BPF prog-id=12 op=LOAD Dec 13 03:59:54.010000 audit: BPF prog-id=3 op=UNLOAD Dec 13 03:59:54.013000 audit: BPF prog-id=13 op=LOAD Dec 13 03:59:54.014000 audit: BPF prog-id=14 op=LOAD Dec 13 03:59:54.015000 audit: BPF prog-id=4 op=UNLOAD Dec 13 03:59:54.015000 audit: BPF prog-id=5 op=UNLOAD Dec 13 03:59:54.019000 audit: BPF prog-id=15 op=LOAD Dec 13 03:59:54.019000 audit: BPF prog-id=12 op=UNLOAD Dec 13 03:59:54.020000 audit: BPF prog-id=16 op=LOAD Dec 13 03:59:54.022000 audit: BPF prog-id=17 op=LOAD Dec 13 03:59:54.022000 audit: BPF prog-id=13 op=UNLOAD Dec 13 03:59:54.022000 audit: BPF prog-id=14 op=UNLOAD Dec 13 03:59:54.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.034000 audit: BPF prog-id=15 op=UNLOAD Dec 13 03:59:54.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:59:54.168000 audit: BPF prog-id=18 op=LOAD Dec 13 03:59:54.169000 audit: BPF prog-id=19 op=LOAD Dec 13 03:59:54.169000 audit: BPF prog-id=20 op=LOAD Dec 13 03:59:54.169000 audit: BPF prog-id=16 op=UNLOAD Dec 13 03:59:54.169000 audit: BPF prog-id=17 op=UNLOAD Dec 13 03:59:54.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.224000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 03:59:54.224000 audit[926]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffffd89ec90 a2=4000 a3=7ffffd89ed2c items=0 ppid=1 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:59:54.224000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 03:59:49.921256 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:59:54.009454 systemd[1]: Queued start job for default target multi-user.target. Dec 13 03:59:49.922402 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:59:54.009465 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 03:59:49.922455 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:59:54.023977 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 03:59:49.922526 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 03:59:49.922554 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 03:59:49.922632 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 03:59:49.922668 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 03:59:54.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:49.923183 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 03:59:49.923281 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:59:49.923318 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:59:49.926622 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 03:59:49.926716 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 03:59:49.926766 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 03:59:49.926851 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 03:59:49.926902 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 03:59:49.926941 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 03:59:52.958883 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:59:52.959203 
/usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:59:54.235658 systemd[1]: Finished modprobe@drm.service. Dec 13 03:59:54.235685 systemd[1]: Started systemd-journald.service. Dec 13 03:59:52.959327 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:59:52.959537 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:59:52.959597 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 03:59:52.959667 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-12-13T03:59:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 03:59:54.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.236895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:59:54.237045 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:59:54.237798 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 03:59:54.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.238024 systemd[1]: Finished modprobe@fuse.service. Dec 13 03:59:54.238747 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:59:54.239930 systemd[1]: Finished modprobe@loop.service. 
Dec 13 03:59:54.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.240643 systemd[1]: Finished systemd-network-generator.service. Dec 13 03:59:54.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.241531 systemd[1]: Finished systemd-remount-fs.service. Dec 13 03:59:54.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.243340 systemd[1]: Reached target network-pre.target. Dec 13 03:59:54.246188 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 03:59:54.247630 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 03:59:54.250969 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 03:59:54.252960 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 03:59:54.255020 systemd[1]: Starting systemd-journal-flush.service... Dec 13 03:59:54.255547 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:59:54.256438 systemd[1]: Starting systemd-random-seed.service... Dec 13 03:59:54.257907 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:59:54.259198 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 03:59:54.259936 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 03:59:54.264959 systemd-journald[926]: Time spent on flushing to /var/log/journal/6084184725254990b2b426934b92433e is 40.847ms for 1105 entries. Dec 13 03:59:54.264959 systemd-journald[926]: System Journal (/var/log/journal/6084184725254990b2b426934b92433e) is 8.0M, max 584.8M, 576.8M free. Dec 13 03:59:54.361965 systemd-journald[926]: Received client request to flush runtime journal. Dec 13 03:59:54.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:59:54.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:54.266640 systemd[1]: Finished systemd-modules-load.service. Dec 13 03:59:54.268787 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:59:54.269643 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 03:59:54.273057 systemd[1]: Starting systemd-sysusers.service... Dec 13 03:59:54.362943 udevadm[958]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 03:59:54.289733 systemd[1]: Finished systemd-random-seed.service. Dec 13 03:59:54.290434 systemd[1]: Reached target first-boot-complete.target. Dec 13 03:59:54.314431 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:59:54.322572 systemd[1]: Finished systemd-sysusers.service. Dec 13 03:59:54.340611 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:59:54.342243 systemd[1]: Starting systemd-udev-settle.service... Dec 13 03:59:54.362827 systemd[1]: Finished systemd-journal-flush.service. Dec 13 03:59:54.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:55.356931 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 03:59:55.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:55.358000 audit: BPF prog-id=21 op=LOAD Dec 13 03:59:55.359000 audit: BPF prog-id=22 op=LOAD Dec 13 03:59:55.359000 audit: BPF prog-id=7 op=UNLOAD Dec 13 03:59:55.359000 audit: BPF prog-id=8 op=UNLOAD Dec 13 03:59:55.361467 systemd[1]: Starting systemd-udevd.service... Dec 13 03:59:55.400615 systemd-udevd[959]: Using default interface naming scheme 'v252'. Dec 13 03:59:55.462039 systemd[1]: Started systemd-udevd.service. Dec 13 03:59:55.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:55.466000 audit: BPF prog-id=23 op=LOAD Dec 13 03:59:55.470135 systemd[1]: Starting systemd-networkd.service... Dec 13 03:59:55.489000 audit: BPF prog-id=24 op=LOAD Dec 13 03:59:55.490000 audit: BPF prog-id=25 op=LOAD Dec 13 03:59:55.490000 audit: BPF prog-id=26 op=LOAD Dec 13 03:59:55.494161 systemd[1]: Starting systemd-userdbd.service... Dec 13 03:59:55.547437 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 03:59:55.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:55.570489 systemd[1]: Started systemd-userdbd.service. 
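The journald lines above record the flush from the runtime journal to /var/log/journal along with its size accounting ("System Journal ... is 8.0M, max 584.8M, 576.8M free"). Those caps are journald.conf policy; a hedged sketch of the relevant knobs follows (the option names are standard journald.conf settings, but the values here are illustrative, not Flatcar's shipped defaults):

    # Current disk accounting, matching the numbers journald logs at flush time
    journalctl --disk-usage

    # /etc/systemd/journald.conf : the options behind the caps above (sketch)
    #   [Journal]
    #   Storage=persistent     # keep /var/log/journal; triggers the runtime flush
    #   SystemMaxUse=584M      # ceiling for the persistent journal
    #   RuntimeMaxUse=64M      # ceiling for /run/log/journal before the flush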
Dec 13 03:59:55.634864 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 03:59:55.642878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:59:55.652825 kernel: ACPI: button: Power Button [PWRF] Dec 13 03:59:55.663000 audit[968]: AVC avc: denied { confidentiality } for pid=968 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 03:59:55.670399 systemd-networkd[971]: lo: Link UP Dec 13 03:59:55.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:55.670409 systemd-networkd[971]: lo: Gained carrier Dec 13 03:59:55.670884 systemd-networkd[971]: Enumeration completed Dec 13 03:59:55.671300 systemd[1]: Started systemd-networkd.service. Dec 13 03:59:55.671631 systemd-networkd[971]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:59:55.673232 systemd-networkd[971]: eth0: Link UP Dec 13 03:59:55.673241 systemd-networkd[971]: eth0: Gained carrier Dec 13 03:59:55.663000 audit[968]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5598fd24e940 a1=337fc a2=7fcbc5400bc5 a3=5 items=110 ppid=959 pid=968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:59:55.663000 audit: CWD cwd="/" Dec 13 03:59:55.663000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=1 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=2 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=3 name=(null) inode=13312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=4 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=5 name=(null) inode=14337 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=6 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=7 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=8 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=9 name=(null) inode=14339 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=10 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=11 name=(null) inode=14340 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=12 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=13 name=(null) inode=14341 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=14 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=15 name=(null) inode=14342 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=16 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=17 name=(null) inode=14343 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=18 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=19 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=20 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=21 name=(null) inode=14345 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=22 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=23 name=(null) inode=14346 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=24 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=25 name=(null) 
inode=14347 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=26 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=27 name=(null) inode=14348 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=28 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=29 name=(null) inode=14349 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=30 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=31 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=32 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=33 name=(null) inode=14351 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=34 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=35 name=(null) inode=14352 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=36 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=37 name=(null) inode=14353 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=38 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=39 name=(null) inode=14354 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=40 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=41 name=(null) inode=14355 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=42 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=43 name=(null) inode=14356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=44 name=(null) inode=14356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=45 name=(null) inode=14357 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=46 name=(null) inode=14356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=47 name=(null) inode=14358 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=48 name=(null) inode=14356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=49 name=(null) inode=14359 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=50 name=(null) inode=14356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=51 name=(null) inode=14360 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=52 name=(null) inode=14356 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=53 name=(null) inode=14361 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=55 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=56 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=57 name=(null) inode=14363 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=58 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=59 name=(null) inode=14364 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=60 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=61 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=62 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=63 name=(null) inode=14366 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=64 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=65 name=(null) inode=14367 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=66 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=67 name=(null) inode=14368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=68 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=69 name=(null) inode=14369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=70 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=71 name=(null) inode=14370 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=72 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=73 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=74 
name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=75 name=(null) inode=14372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=76 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=77 name=(null) inode=14373 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=78 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=79 name=(null) inode=14374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=80 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=81 name=(null) inode=14375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=82 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=83 name=(null) inode=14376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=84 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=85 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=86 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=87 name=(null) inode=14378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=88 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=89 name=(null) inode=14379 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=90 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=91 name=(null) inode=14380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=92 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=93 name=(null) inode=14381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=94 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=95 name=(null) inode=14382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=96 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=97 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=98 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=99 name=(null) inode=14384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=100 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=101 name=(null) inode=14385 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=102 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=103 name=(null) inode=14386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=104 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=105 name=(null) inode=14387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=106 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=107 name=(null) inode=14388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PATH item=109 name=(null) inode=14396 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:59:55.663000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 03:59:55.687838 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 03:59:55.688926 systemd-networkd[971]: eth0: DHCPv4 address 172.24.4.115/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 03:59:55.696836 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 03:59:55.703855 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 03:59:55.747280 systemd[1]: Finished systemd-udev-settle.service. Dec 13 03:59:55.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:55.749343 systemd[1]: Starting lvm2-activation-early.service... Dec 13 03:59:55.781656 lvm[988]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:59:55.814830 systemd[1]: Finished lvm2-activation-early.service. Dec 13 03:59:55.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:55.815455 systemd[1]: Reached target cryptsetup.target. Dec 13 03:59:55.816955 systemd[1]: Starting lvm2-activation.service... Dec 13 03:59:55.826414 lvm[989]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:59:55.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:55.858863 systemd[1]: Finished lvm2-activation.service. Dec 13 03:59:55.859419 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:59:55.859859 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 03:59:55.859879 systemd[1]: Reached target local-fs.target. Dec 13 03:59:55.860286 systemd[1]: Reached target machines.target. Dec 13 03:59:55.861873 systemd[1]: Starting ldconfig.service... Dec 13 03:59:55.864312 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:59:55.864363 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:59:55.865507 systemd[1]: Starting systemd-boot-update.service... Dec 13 03:59:55.867398 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 03:59:55.869288 systemd[1]: Starting systemd-machine-id-commit.service... 
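Earlier in this boot eth0 matched /usr/lib/systemd/network/zz-default.network, and the line above records the resulting DHCPv4 lease (172.24.4.115/24 via gateway 172.24.4.1). A catch-all DHCP .network file of that kind looks roughly like the sketch below; this is illustrative, not the verbatim Flatcar unit:

    # /usr/lib/systemd/network/zz-default.network (illustrative sketch)
    #   [Match]
    #   Name=*
    #
    #   [Network]
    #   DHCP=yes

    # Confirm the lease and carrier state reported in the log:
    networkctl status eth0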
Dec 13 03:59:55.873722 systemd[1]: Starting systemd-sysext.service... Dec 13 03:59:55.888133 systemd[1]: boot.automount: Got automount request for /boot, triggered by 991 (bootctl) Dec 13 03:59:55.889582 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 03:59:55.897508 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 03:59:55.918520 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 03:59:55.918716 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 03:59:55.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:55.937455 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 03:59:55.948840 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 03:59:56.426967 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 03:59:56.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:56.436499 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 03:59:56.477086 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 03:59:56.518872 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 03:59:56.592400 systemd-fsck[1003]: fsck.fat 4.2 (2021-01-31) Dec 13 03:59:56.592400 systemd-fsck[1003]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 03:59:56.594600 (sd-sysext)[1006]: Using extensions 'kubernetes'. Dec 13 03:59:56.598098 (sd-sysext)[1006]: Merged extensions into '/usr'. Dec 13 03:59:56.598311 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 03:59:56.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:56.602622 systemd[1]: Mounting boot.mount... Dec 13 03:59:56.635179 systemd[1]: Mounted boot.mount. Dec 13 03:59:56.648612 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:56.651728 systemd[1]: Mounting usr-share-oem.mount... Dec 13 03:59:56.652524 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:59:56.654227 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:59:56.656221 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:59:56.662163 systemd[1]: Starting modprobe@loop.service... Dec 13 03:59:56.662911 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:59:56.663074 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:59:56.663225 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:56.664501 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:59:56.664671 systemd[1]: Finished modprobe@dm_mod.service. 
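The (sd-sysext) lines above show a "kubernetes" system extension being merged into /usr. For such a merge to happen, the extension image must carry a release file whose ID matches the host; a minimal sketch (the directory name and values below are illustrative, not taken from this image):

    # Inside the extension image:
    # usr/lib/extension-release.d/extension-release.kubernetes
    #   ID=flatcar          # must match the host's /etc/os-release ID (or be _any)
    #   SYSEXT_LEVEL=1.0

    # Inspect or undo the overlay that "Merged extensions into '/usr'" created:
    systemd-sysext status
    systemd-sysext unmerge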
Dec 13 03:59:56.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:59:56.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:59:56.668673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:59:56.668888 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 03:59:56.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:59:56.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:59:56.669980 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:59:56.673231 systemd[1]: Finished systemd-boot-update.service.
Dec 13 03:59:56.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:59:56.674739 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 03:59:56.676765 systemd[1]: Finished systemd-sysext.service.
Dec 13 03:59:56.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:59:56.677660 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 03:59:56.677842 systemd[1]: Finished modprobe@loop.service.
Dec 13 03:59:56.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:59:56.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 03:59:56.680559 systemd[1]: Starting ensure-sysext.service...
Dec 13 03:59:56.681169 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 03:59:56.682273 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 03:59:56.691858 systemd[1]: Reloading.
Dec 13 03:59:56.718717 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 03:59:56.745502 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 03:59:56.769485 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
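The three systemd-tmpfiles warnings above mean that two tmpfiles.d fragments declare the same path; the first line wins and the later duplicates are ignored, which is harmless noise unless the entries disagree. tmpfiles.d lines have the shape Type Path Mode UID GID Age Argument; an illustrative entry for the first warned path, plus a way to find the colliding fragments:

    #   Type Path      Mode UID  GID  Age Argument
    #   d    /run/lock 0755 root root -   -

    # Find which fragments collide on a warned path:
    grep -rn "/run/lock" /usr/lib/tmpfiles.d/ /etc/tmpfiles.d/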
Dec 13 03:59:56.780489 /usr/lib/systemd/system-generators/torcx-generator[1033]: time="2024-12-13T03:59:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:59:56.780525 /usr/lib/systemd/system-generators/torcx-generator[1033]: time="2024-12-13T03:59:56Z" level=info msg="torcx already run" Dec 13 03:59:56.872298 systemd-networkd[971]: eth0: Gained IPv6LL Dec 13 03:59:56.915718 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:59:56.915761 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:59:56.943237 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:59:57.012000 audit: BPF prog-id=27 op=LOAD Dec 13 03:59:57.012000 audit: BPF prog-id=18 op=UNLOAD Dec 13 03:59:57.012000 audit: BPF prog-id=28 op=LOAD Dec 13 03:59:57.012000 audit: BPF prog-id=29 op=LOAD Dec 13 03:59:57.012000 audit: BPF prog-id=19 op=UNLOAD Dec 13 03:59:57.012000 audit: BPF prog-id=20 op=UNLOAD Dec 13 03:59:57.014000 audit: BPF prog-id=30 op=LOAD Dec 13 03:59:57.014000 audit: BPF prog-id=31 op=LOAD Dec 13 03:59:57.014000 audit: BPF prog-id=21 op=UNLOAD Dec 13 03:59:57.014000 audit: BPF prog-id=22 op=UNLOAD Dec 13 03:59:57.016000 audit: BPF prog-id=32 op=LOAD Dec 13 03:59:57.016000 audit: BPF prog-id=24 op=UNLOAD Dec 13 03:59:57.016000 audit: BPF prog-id=33 op=LOAD Dec 13 03:59:57.016000 audit: BPF prog-id=34 op=LOAD Dec 13 03:59:57.017000 audit: BPF prog-id=25 op=UNLOAD Dec 13 03:59:57.017000 audit: BPF prog-id=26 op=UNLOAD Dec 13 03:59:57.018000 audit: BPF prog-id=35 op=LOAD Dec 13 03:59:57.018000 audit: BPF prog-id=23 op=UNLOAD Dec 13 03:59:57.042481 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:57.042709 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:59:57.044413 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:59:57.046043 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:59:57.048610 systemd[1]: Starting modprobe@loop.service... Dec 13 03:59:57.049378 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:59:57.049498 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:59:57.049619 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:57.051687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:59:57.051947 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:59:57.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:59:57.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.053347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:59:57.053751 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:59:57.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.054744 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:59:57.055076 systemd[1]: Finished modprobe@loop.service. Dec 13 03:59:57.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.056248 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:59:57.057067 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:59:57.059732 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:57.060234 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:59:57.062825 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:59:57.065216 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:59:57.067891 systemd[1]: Starting modprobe@loop.service... Dec 13 03:59:57.068973 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:59:57.069196 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:59:57.069423 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:57.071127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:59:57.071257 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:59:57.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.072973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:59:57.073233 systemd[1]: Finished modprobe@efi_pstore.service. 
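The locksmithd warnings earlier in this boot flag CPUShares= and MemoryLimit= as cgroup-v1 era directives and ask for CPUWeight= and MemoryMax= instead. A hedged sketch of that migration as a drop-in; the path follows systemd convention and the values are illustrative, and note the parser warning itself only disappears once the legacy lines are removed from the vendor unit:

    mkdir -p /etc/systemd/system/locksmithd.service.d
    cat >/etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf <<'EOF'
    [Service]
    # CPUShares= (cgroup v1) -> CPUWeight= (cgroup v2; range 1..10000, default 100)
    CPUWeight=100
    # MemoryLimit= -> MemoryMax=
    MemoryMax=512M
    EOF
    systemctl daemon-reload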
Dec 13 03:59:57.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.075263 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:59:57.075443 systemd[1]: Finished modprobe@loop.service. Dec 13 03:59:57.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.081713 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:57.082695 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:59:57.084552 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:59:57.087423 systemd[1]: Starting modprobe@drm.service... Dec 13 03:59:57.089239 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:59:57.091856 systemd[1]: Starting modprobe@loop.service... Dec 13 03:59:57.092503 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:59:57.092756 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:59:57.095065 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 03:59:57.095680 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:59:57.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.097706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:59:57.097872 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:59:57.098953 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 03:59:57.099066 systemd[1]: Finished modprobe@drm.service. Dec 13 03:59:57.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:59:57.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.100172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:59:57.100305 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:59:57.101224 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:59:57.101331 systemd[1]: Finished modprobe@loop.service. Dec 13 03:59:57.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.102474 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:59:57.102576 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:59:57.104752 systemd[1]: Finished ensure-sysext.service. Dec 13 03:59:57.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.110834 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 03:59:57.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.140606 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 03:59:57.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.143306 systemd[1]: Starting audit-rules.service... Dec 13 03:59:57.145397 systemd[1]: Starting clean-ca-certificates.service... Dec 13 03:59:57.147040 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 03:59:57.149000 audit: BPF prog-id=36 op=LOAD Dec 13 03:59:57.151000 audit: BPF prog-id=37 op=LOAD Dec 13 03:59:57.150738 systemd[1]: Starting systemd-resolved.service... Dec 13 03:59:57.153672 systemd[1]: Starting systemd-timesyncd.service... Dec 13 03:59:57.155837 systemd[1]: Starting systemd-update-utmp.service... Dec 13 03:59:57.181000 audit[1099]: SYSTEM_BOOT pid=1099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:59:57.185745 systemd[1]: Finished systemd-update-utmp.service. Dec 13 03:59:57.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.194767 systemd[1]: Finished clean-ca-certificates.service. Dec 13 03:59:57.195597 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 03:59:57.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.232909 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 03:59:57.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:59:57.237259 systemd[1]: Started systemd-timesyncd.service. Dec 13 03:59:57.237900 systemd[1]: Reached target time-set.target. Dec 13 03:59:57.255909 augenrules[1114]: No rules Dec 13 03:59:57.254000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 03:59:57.254000 audit[1114]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed6298280 a2=420 a3=0 items=0 ppid=1093 pid=1114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:59:57.254000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 03:59:57.256582 systemd[1]: Finished audit-rules.service. Dec 13 03:59:58.367697 systemd-timesyncd[1098]: Contacted time server 95.81.173.8:123 (0.flatcar.pool.ntp.org). Dec 13 03:59:58.367761 systemd-timesyncd[1098]: Initial clock synchronization to Fri 2024-12-13 03:59:58.367505 UTC. Dec 13 03:59:58.377677 systemd-resolved[1097]: Positive Trust Anchors: Dec 13 03:59:58.377989 ldconfig[990]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 03:59:58.378213 systemd-resolved[1097]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 03:59:58.378317 systemd-resolved[1097]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 03:59:58.481574 systemd[1]: Finished ldconfig.service. Dec 13 03:59:58.485649 systemd-resolved[1097]: Using system hostname 'ci-3510-3-6-f-1413c5ec2e.novalocal'. Dec 13 03:59:58.487480 systemd[1]: Starting systemd-update-done.service... Dec 13 03:59:58.490139 systemd[1]: Started systemd-resolved.service. Dec 13 03:59:58.491595 systemd[1]: Reached target network.target. Dec 13 03:59:58.492622 systemd[1]: Reached target network-online.target. 
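audit PROCTITLE fields carry the process command line hex-encoded, with NUL bytes separating argv entries. Decoding the record above shows the auditctl invocation that loaded the rule set (empty here, consistent with augenrules reporting "No rules"):

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules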
Dec 13 03:59:58.493639 systemd[1]: Reached target nss-lookup.target. Dec 13 03:59:58.503013 systemd[1]: Finished systemd-update-done.service. Dec 13 03:59:58.504303 systemd[1]: Reached target sysinit.target. Dec 13 03:59:58.505465 systemd[1]: Started motdgen.path. Dec 13 03:59:58.506510 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 03:59:58.508218 systemd[1]: Started logrotate.timer. Dec 13 03:59:58.509413 systemd[1]: Started mdadm.timer. Dec 13 03:59:58.510377 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 03:59:58.511435 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 03:59:58.511501 systemd[1]: Reached target paths.target. Dec 13 03:59:58.512536 systemd[1]: Reached target timers.target. Dec 13 03:59:58.515010 systemd[1]: Listening on dbus.socket. Dec 13 03:59:58.517986 systemd[1]: Starting docker.socket... Dec 13 03:59:58.524779 systemd[1]: Listening on sshd.socket. Dec 13 03:59:58.525957 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:59:58.527307 systemd[1]: Listening on docker.socket. Dec 13 03:59:58.528531 systemd[1]: Reached target sockets.target. Dec 13 03:59:58.529580 systemd[1]: Reached target basic.target. Dec 13 03:59:58.530702 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 03:59:58.530769 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 03:59:58.533233 systemd[1]: Starting containerd.service... Dec 13 03:59:58.536739 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 03:59:58.540383 systemd[1]: Starting dbus.service... Dec 13 03:59:58.550362 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 03:59:58.556141 systemd[1]: Starting extend-filesystems.service... Dec 13 03:59:58.559362 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 03:59:58.564968 systemd[1]: Starting kubelet.service... Dec 13 03:59:58.571174 systemd[1]: Starting motdgen.service... Dec 13 03:59:58.577755 systemd[1]: Starting prepare-helm.service... Dec 13 03:59:58.614970 jq[1127]: false Dec 13 03:59:58.580138 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 03:59:58.582748 systemd[1]: Starting sshd-keygen.service... Dec 13 03:59:58.589353 systemd[1]: Starting systemd-logind.service... Dec 13 03:59:58.589916 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:59:58.589995 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 03:59:58.590539 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 03:59:58.591806 systemd[1]: Starting update-engine.service... Dec 13 03:59:58.629601 jq[1136]: true Dec 13 03:59:58.594634 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 03:59:58.609000 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 03:59:58.609171 systemd[1]: Finished ssh-key-proc-cmdline.service. 
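docker.socket, listed above, is the unit that triggered the earlier runtime rewrite of ListenStream=/var/run/docker.sock to /run/docker.sock. The fix that warning asks for is pointing the socket at the modern path in the unit file; an illustrative minimal socket section, not the verbatim shipped unit:

    # docker.socket (illustrative sketch)
    #   [Socket]
    #   ListenStream=/run/docker.sock
    #   SocketMode=0660
    #   SocketUser=root
    #   SocketGroup=docker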
Dec 13 03:59:58.631802 tar[1139]: linux-amd64/helm
Dec 13 03:59:58.613877 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 03:59:58.614035 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 03:59:58.645561 jq[1141]: true
Dec 13 03:59:58.658168 systemd[1]: Created slice system-sshd.slice.
Dec 13 03:59:58.670269 extend-filesystems[1128]: Found loop1
Dec 13 03:59:58.673747 extend-filesystems[1128]: Found vda
Dec 13 03:59:58.674287 extend-filesystems[1128]: Found vda1
Dec 13 03:59:58.674287 extend-filesystems[1128]: Found vda2
Dec 13 03:59:58.674287 extend-filesystems[1128]: Found vda3
Dec 13 03:59:58.674287 extend-filesystems[1128]: Found usr
Dec 13 03:59:58.674287 extend-filesystems[1128]: Found vda4
Dec 13 03:59:58.674287 extend-filesystems[1128]: Found vda6
Dec 13 03:59:58.674287 extend-filesystems[1128]: Found vda7
Dec 13 03:59:58.674287 extend-filesystems[1128]: Found vda9
Dec 13 03:59:58.674287 extend-filesystems[1128]: Checking size of /dev/vda9
Dec 13 03:59:58.676758 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 03:59:58.676923 systemd[1]: Finished motdgen.service.
Dec 13 03:59:58.688706 dbus-daemon[1125]: [system] SELinux support is enabled
Dec 13 03:59:58.689891 systemd[1]: Started dbus.service.
Dec 13 03:59:58.692353 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 03:59:58.692378 systemd[1]: Reached target system-config.target.
Dec 13 03:59:58.692877 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 03:59:58.692894 systemd[1]: Reached target user-config.target.
Dec 13 03:59:58.717084 extend-filesystems[1128]: Resized partition /dev/vda9
Dec 13 03:59:58.735675 extend-filesystems[1181]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 03:59:58.768705 env[1142]: time="2024-12-13T03:59:58.768309567Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 03:59:58.781727 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Dec 13 03:59:58.799032 update_engine[1135]: I1213 03:59:58.795826 1135 main.cc:92] Flatcar Update Engine starting
Dec 13 03:59:58.808554 systemd[1]: Started update-engine.service.
Dec 13 03:59:58.856929 update_engine[1135]: I1213 03:59:58.808602 1135 update_check_scheduler.cc:74] Next update check in 9m26s
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.831067149Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.848870810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.850818082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.850868907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.851205579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.851246656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.851263988Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.851276041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.851381579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:59:58.856974 env[1142]: time="2024-12-13T03:59:58.851766120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 03:59:58.811379 systemd[1]: Started locksmithd.service.
Dec 13 03:59:58.857350 env[1142]: time="2024-12-13T03:59:58.851911563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 03:59:58.857350 env[1142]: time="2024-12-13T03:59:58.851949204Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 03:59:58.857350 env[1142]: time="2024-12-13T03:59:58.852001131Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 03:59:58.857350 env[1142]: time="2024-12-13T03:59:58.852036728Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 03:59:58.858091 systemd-logind[1134]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 03:59:58.858135 systemd-logind[1134]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 03:59:58.859679 systemd-logind[1134]: New seat seat0.
Dec 13 03:59:58.862510 systemd[1]: Started systemd-logind.service.
Dec 13 03:59:58.869581 bash[1178]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 03:59:58.867994 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 03:59:58.873716 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Dec 13 03:59:58.993315 extend-filesystems[1181]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 03:59:58.993315 extend-filesystems[1181]: old_desc_blocks = 1, new_desc_blocks = 3
Dec 13 03:59:58.993315 extend-filesystems[1181]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Dec 13 03:59:59.011565 extend-filesystems[1128]: Resized filesystem in /dev/vda9
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.002270952Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.002435931Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.002475115Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.002623022Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.002810454Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.002851892Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.002935438Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.003007413Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.003044292Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.003113522Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.003146544Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.003212368Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.003548728Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 03:59:59.013842 env[1142]: time="2024-12-13T03:59:59.003919814Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 03:59:58.995123 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.004902007Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.004996804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005067838Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005294553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005368041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005432552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005466225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005531798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005567455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005625433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005715242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.005756990Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.006341777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.006384567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.018704 env[1142]: time="2024-12-13T03:59:59.006414122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 03:59:58.995495 systemd[1]: Finished extend-filesystems.service.
Dec 13 03:59:59.027762 env[1142]: time="2024-12-13T03:59:59.006444159Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 03:59:59.027762 env[1142]: time="2024-12-13T03:59:59.015818130Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 03:59:59.027762 env[1142]: time="2024-12-13T03:59:59.015887159Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 03:59:59.027762 env[1142]: time="2024-12-13T03:59:59.015939658Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 03:59:59.027762 env[1142]: time="2024-12-13T03:59:59.016032031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.016607099Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.016833574Z" level=info msg="Connect containerd service"
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.016957687Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.019979324Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.022647448Z" level=info msg="Start subscribing containerd event"
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.022801277Z" level=info msg="Start recovering state"
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.022922815Z" level=info msg="Start event monitor"
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.022949344Z" level=info msg="Start snapshots syncer"
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.022970203Z" level=info msg="Start cni network conf syncer for default"
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.022989369Z" level=info msg="Start streaming server"
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.024716619Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 03:59:59.028125 env[1142]: time="2024-12-13T03:59:59.025032201Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 03:59:59.048607 systemd[1]: Started containerd.service.
Dec 13 03:59:59.049557 env[1142]: time="2024-12-13T03:59:59.049528514Z" level=info msg="containerd successfully booted in 0.305065s"
Dec 13 03:59:59.613043 locksmithd[1185]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 03:59:59.681173 sshd_keygen[1155]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 03:59:59.731521 systemd[1]: Finished sshd-keygen.service.
Dec 13 03:59:59.733818 systemd[1]: Starting issuegen.service...
Dec 13 03:59:59.735569 systemd[1]: Started sshd@0-172.24.4.115:22-172.24.4.1:49026.service.
Dec 13 03:59:59.744593 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 03:59:59.744807 systemd[1]: Finished issuegen.service.
Dec 13 03:59:59.747103 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 03:59:59.757852 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 03:59:59.760158 systemd[1]: Started getty@tty1.service.
Dec 13 03:59:59.762008 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 03:59:59.762645 systemd[1]: Reached target getty.target.
Dec 13 03:59:59.767229 tar[1139]: linux-amd64/LICENSE
Dec 13 03:59:59.767493 tar[1139]: linux-amd64/README.md
Dec 13 03:59:59.772425 systemd[1]: Finished prepare-helm.service.
Dec 13 04:00:00.831842 sshd[1201]: Accepted publickey for core from 172.24.4.1 port 49026 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 04:00:00.837769 sshd[1201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:00:00.871881 systemd[1]: Created slice user-500.slice.
Dec 13 04:00:00.877191 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 04:00:00.882004 systemd-logind[1134]: New session 1 of user core.
Dec 13 04:00:00.896220 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 04:00:00.899414 systemd[1]: Starting user@500.service...
Dec 13 04:00:00.907057 (systemd)[1210]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:00:01.051377 systemd[1]: Started kubelet.service.
Dec 13 04:00:01.068952 systemd[1210]: Queued start job for default target default.target.
Dec 13 04:00:01.070053 systemd[1210]: Reached target paths.target.
Dec 13 04:00:01.070072 systemd[1210]: Reached target sockets.target.
Dec 13 04:00:01.070086 systemd[1210]: Reached target timers.target.
Dec 13 04:00:01.070100 systemd[1210]: Reached target basic.target.
Dec 13 04:00:01.070146 systemd[1210]: Reached target default.target.
Dec 13 04:00:01.070174 systemd[1210]: Startup finished in 152ms.
Dec 13 04:00:01.070364 systemd[1]: Started user@500.service.
Dec 13 04:00:01.073857 systemd[1]: Started session-1.scope.
Dec 13 04:00:01.439380 systemd[1]: Started sshd@1-172.24.4.115:22-172.24.4.1:49038.service.
Dec 13 04:00:02.617915 kubelet[1218]: E1213 04:00:02.617446 1218 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:00:02.622053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:00:02.622405 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:00:02.623237 systemd[1]: kubelet.service: Consumed 1.848s CPU time.
Dec 13 04:00:03.495987 sshd[1227]: Accepted publickey for core from 172.24.4.1 port 49038 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 04:00:03.499810 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:00:03.511440 systemd-logind[1134]: New session 2 of user core.
Dec 13 04:00:03.512953 systemd[1]: Started session-2.scope.
Dec 13 04:00:04.141816 systemd[1]: Started sshd@2-172.24.4.115:22-172.24.4.1:49042.service.
Dec 13 04:00:04.435373 sshd[1227]: pam_unix(sshd:session): session closed for user core
Dec 13 04:00:04.449539 systemd[1]: sshd@1-172.24.4.115:22-172.24.4.1:49038.service: Deactivated successfully.
Dec 13 04:00:04.451311 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 04:00:04.452632 systemd-logind[1134]: Session 2 logged out. Waiting for processes to exit.
Dec 13 04:00:04.454939 systemd-logind[1134]: Removed session 2.
Dec 13 04:00:05.631991 sshd[1233]: Accepted publickey for core from 172.24.4.1 port 49042 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 04:00:05.635772 sshd[1233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:00:05.645792 systemd-logind[1134]: New session 3 of user core.
Dec 13 04:00:05.647280 systemd[1]: Started session-3.scope.
Dec 13 04:00:05.687327 coreos-metadata[1123]: Dec 13 04:00:05.687 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 04:00:05.766264 coreos-metadata[1123]: Dec 13 04:00:05.766 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 04:00:06.133578 coreos-metadata[1123]: Dec 13 04:00:06.133 INFO Fetch successful
Dec 13 04:00:06.133578 coreos-metadata[1123]: Dec 13 04:00:06.133 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 04:00:06.151040 coreos-metadata[1123]: Dec 13 04:00:06.150 INFO Fetch successful
Dec 13 04:00:06.155147 unknown[1123]: wrote ssh authorized keys file for user: core
Dec 13 04:00:06.195879 update-ssh-keys[1239]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 04:00:06.196849 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 04:00:06.197639 systemd[1]: Reached target multi-user.target.
Dec 13 04:00:06.200985 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 04:00:06.217342 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 04:00:06.217820 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 04:00:06.218839 systemd[1]: Startup finished in 904ms (kernel) + 8.775s (initrd) + 15.666s (userspace) = 25.346s.
Dec 13 04:00:06.546984 sshd[1233]: pam_unix(sshd:session): session closed for user core
Dec 13 04:00:06.553920 systemd-logind[1134]: Session 3 logged out. Waiting for processes to exit.
Dec 13 04:00:06.554656 systemd[1]: sshd@2-172.24.4.115:22-172.24.4.1:49042.service: Deactivated successfully.
Dec 13 04:00:06.556548 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 04:00:06.558638 systemd-logind[1134]: Removed session 3.
Dec 13 04:00:12.875498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 04:00:12.876306 systemd[1]: Stopped kubelet.service.
Dec 13 04:00:12.876403 systemd[1]: kubelet.service: Consumed 1.848s CPU time.
Dec 13 04:00:12.880369 systemd[1]: Starting kubelet.service...
Dec 13 04:00:13.241424 systemd[1]: Started kubelet.service.
Dec 13 04:00:13.801480 kubelet[1247]: E1213 04:00:13.801393 1247 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:00:13.810150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:00:13.810447 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:00:16.558720 systemd[1]: Started sshd@3-172.24.4.115:22-172.24.4.1:44712.service.
Dec 13 04:00:17.722261 sshd[1255]: Accepted publickey for core from 172.24.4.1 port 44712 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 04:00:17.725401 sshd[1255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:00:17.737211 systemd-logind[1134]: New session 4 of user core.
Dec 13 04:00:17.738236 systemd[1]: Started session-4.scope.
Dec 13 04:00:18.308840 sshd[1255]: pam_unix(sshd:session): session closed for user core
Dec 13 04:00:18.316227 systemd[1]: Started sshd@4-172.24.4.115:22-172.24.4.1:44728.service.
Dec 13 04:00:18.321487 systemd[1]: sshd@3-172.24.4.115:22-172.24.4.1:44712.service: Deactivated successfully.
Dec 13 04:00:18.324344 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 04:00:18.328052 systemd-logind[1134]: Session 4 logged out. Waiting for processes to exit.
Dec 13 04:00:18.330632 systemd-logind[1134]: Removed session 4.
Dec 13 04:00:19.709307 sshd[1260]: Accepted publickey for core from 172.24.4.1 port 44728 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 04:00:19.713006 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:00:19.723516 systemd-logind[1134]: New session 5 of user core.
Dec 13 04:00:19.724494 systemd[1]: Started session-5.scope.
Dec 13 04:00:20.309269 sshd[1260]: pam_unix(sshd:session): session closed for user core
Dec 13 04:00:20.316124 systemd[1]: Started sshd@5-172.24.4.115:22-172.24.4.1:44738.service.
Dec 13 04:00:20.319607 systemd[1]: sshd@4-172.24.4.115:22-172.24.4.1:44728.service: Deactivated successfully.
Dec 13 04:00:20.321470 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 04:00:20.325019 systemd-logind[1134]: Session 5 logged out. Waiting for processes to exit.
Dec 13 04:00:20.327950 systemd-logind[1134]: Removed session 5.
Dec 13 04:00:21.896978 sshd[1266]: Accepted publickey for core from 172.24.4.1 port 44738 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 04:00:21.900206 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:00:21.911835 systemd-logind[1134]: New session 6 of user core.
Dec 13 04:00:21.912892 systemd[1]: Started session-6.scope.
Dec 13 04:00:22.631652 sshd[1266]: pam_unix(sshd:session): session closed for user core
Dec 13 04:00:22.639168 systemd[1]: Started sshd@6-172.24.4.115:22-172.24.4.1:44740.service.
Dec 13 04:00:22.644205 systemd[1]: sshd@5-172.24.4.115:22-172.24.4.1:44738.service: Deactivated successfully.
Dec 13 04:00:22.645925 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 04:00:22.649273 systemd-logind[1134]: Session 6 logged out. Waiting for processes to exit.
Dec 13 04:00:22.651971 systemd-logind[1134]: Removed session 6.
Dec 13 04:00:24.003731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 04:00:24.004187 systemd[1]: Stopped kubelet.service.
Dec 13 04:00:24.006714 systemd[1]: Starting kubelet.service...
Dec 13 04:00:24.272312 systemd[1]: Started kubelet.service.
Dec 13 04:00:24.303227 sshd[1272]: Accepted publickey for core from 172.24.4.1 port 44740 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk
Dec 13 04:00:24.305203 sshd[1272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:00:24.315119 systemd[1]: Started session-7.scope.
Dec 13 04:00:24.316804 systemd-logind[1134]: New session 7 of user core.
Dec 13 04:00:24.339192 kubelet[1279]: E1213 04:00:24.339137 1279 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:00:24.342780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:00:24.343063 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:00:24.800920 sudo[1286]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 04:00:24.801473 sudo[1286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 04:00:24.848804 systemd[1]: Starting docker.service...
Dec 13 04:00:24.914050 env[1296]: time="2024-12-13T04:00:24.914008609Z" level=info msg="Starting up"
Dec 13 04:00:24.917152 env[1296]: time="2024-12-13T04:00:24.917085079Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 04:00:24.917219 env[1296]: time="2024-12-13T04:00:24.917148849Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 04:00:24.917219 env[1296]: time="2024-12-13T04:00:24.917200526Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 04:00:24.917276 env[1296]: time="2024-12-13T04:00:24.917230783Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 04:00:24.920858 env[1296]: time="2024-12-13T04:00:24.920804134Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 04:00:24.920937 env[1296]: time="2024-12-13T04:00:24.920851814Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 04:00:24.920937 env[1296]: time="2024-12-13T04:00:24.920892039Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 04:00:24.920937 env[1296]: time="2024-12-13T04:00:24.920916485Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 04:00:24.930304 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2145313249-merged.mount: Deactivated successfully.
Dec 13 04:00:24.975426 env[1296]: time="2024-12-13T04:00:24.975351368Z" level=info msg="Loading containers: start."
Dec 13 04:00:25.268730 kernel: Initializing XFRM netlink socket
Dec 13 04:00:25.363183 env[1296]: time="2024-12-13T04:00:25.363076055Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 04:00:25.463977 systemd-networkd[971]: docker0: Link UP
Dec 13 04:00:25.489169 env[1296]: time="2024-12-13T04:00:25.489105353Z" level=info msg="Loading containers: done."
Dec 13 04:00:25.522347 env[1296]: time="2024-12-13T04:00:25.522135251Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 04:00:25.522980 env[1296]: time="2024-12-13T04:00:25.522835574Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 04:00:25.523791 env[1296]: time="2024-12-13T04:00:25.523644461Z" level=info msg="Daemon has completed initialization"
Dec 13 04:00:25.577801 systemd[1]: Started docker.service.
Dec 13 04:00:25.599147 env[1296]: time="2024-12-13T04:00:25.599066996Z" level=info msg="API listen on /run/docker.sock"
Dec 13 04:00:27.549352 env[1142]: time="2024-12-13T04:00:27.549239614Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 04:00:28.252894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196643359.mount: Deactivated successfully.
Dec 13 04:00:31.448120 env[1142]: time="2024-12-13T04:00:31.447894621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:31.450669 env[1142]: time="2024-12-13T04:00:31.450571651Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:31.455741 env[1142]: time="2024-12-13T04:00:31.454548709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:31.458818 env[1142]: time="2024-12-13T04:00:31.458762562Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:31.463594 env[1142]: time="2024-12-13T04:00:31.461267369Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 04:00:31.478429 env[1142]: time="2024-12-13T04:00:31.478353380Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 04:00:34.503466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 04:00:34.503769 systemd[1]: Stopped kubelet.service.
Dec 13 04:00:34.505639 systemd[1]: Starting kubelet.service...
Dec 13 04:00:35.404200 systemd[1]: Started kubelet.service.
Dec 13 04:00:35.562801 env[1142]: time="2024-12-13T04:00:35.561090451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:35.569765 env[1142]: time="2024-12-13T04:00:35.569708904Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:35.576071 env[1142]: time="2024-12-13T04:00:35.575964846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:35.580738 env[1142]: time="2024-12-13T04:00:35.580636868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:35.583068 env[1142]: time="2024-12-13T04:00:35.582979341Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 04:00:35.586235 kubelet[1437]: E1213 04:00:35.586159 1437 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:00:35.593352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:00:35.593649 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:00:35.612197 env[1142]: time="2024-12-13T04:00:35.612084954Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 04:00:38.103438 env[1142]: time="2024-12-13T04:00:38.103188875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:38.107971 env[1142]: time="2024-12-13T04:00:38.107850086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:38.113218 env[1142]: time="2024-12-13T04:00:38.113143953Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:38.120238 env[1142]: time="2024-12-13T04:00:38.120167276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:38.122704 env[1142]: time="2024-12-13T04:00:38.122603053Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 04:00:38.146988 env[1142]: time="2024-12-13T04:00:38.146905369Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 04:00:40.627773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789818740.mount: Deactivated successfully.
Dec 13 04:00:41.511449 env[1142]: time="2024-12-13T04:00:41.511258151Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:41.514625 env[1142]: time="2024-12-13T04:00:41.514552329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:41.516477 env[1142]: time="2024-12-13T04:00:41.516417947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:41.518378 env[1142]: time="2024-12-13T04:00:41.518324012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:41.518985 env[1142]: time="2024-12-13T04:00:41.518930760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 04:00:41.535083 env[1142]: time="2024-12-13T04:00:41.534995298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 04:00:42.175736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514061370.mount: Deactivated successfully.
Dec 13 04:00:43.863776 env[1142]: time="2024-12-13T04:00:43.863629649Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:43.866318 env[1142]: time="2024-12-13T04:00:43.866262355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:43.874427 update_engine[1135]: I1213 04:00:43.873786 1135 update_attempter.cc:509] Updating boot flags...
Dec 13 04:00:43.877945 env[1142]: time="2024-12-13T04:00:43.877862190Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:43.889828 env[1142]: time="2024-12-13T04:00:43.889789168Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:43.893982 env[1142]: time="2024-12-13T04:00:43.893945203Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 04:00:43.954224 env[1142]: time="2024-12-13T04:00:43.954003978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 04:00:44.529061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340509771.mount: Deactivated successfully.
Dec 13 04:00:44.541389 env[1142]: time="2024-12-13T04:00:44.541320895Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:44.545434 env[1142]: time="2024-12-13T04:00:44.545381150Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:44.548959 env[1142]: time="2024-12-13T04:00:44.548904588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:44.552093 env[1142]: time="2024-12-13T04:00:44.552039146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:44.553703 env[1142]: time="2024-12-13T04:00:44.553575187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 04:00:44.577107 env[1142]: time="2024-12-13T04:00:44.577012532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 04:00:45.192393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627419167.mount: Deactivated successfully.
Dec 13 04:00:45.753477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 04:00:45.753723 systemd[1]: Stopped kubelet.service.
Dec 13 04:00:45.755317 systemd[1]: Starting kubelet.service...
Dec 13 04:00:46.568000 systemd[1]: Started kubelet.service.
Dec 13 04:00:46.665404 kubelet[1491]: E1213 04:00:46.665342 1491 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 04:00:46.669452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:00:46.669801 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 04:00:49.914208 env[1142]: time="2024-12-13T04:00:49.913980216Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:49.920893 env[1142]: time="2024-12-13T04:00:49.920774568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:49.927057 env[1142]: time="2024-12-13T04:00:49.926992109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:49.933860 env[1142]: time="2024-12-13T04:00:49.933762998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 04:00:49.937017 env[1142]: time="2024-12-13T04:00:49.936905872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 04:00:55.360346 systemd[1]: Stopped kubelet.service.
Dec 13 04:00:55.366535 systemd[1]: Starting kubelet.service...
Dec 13 04:00:55.407340 systemd[1]: Reloading.
Dec 13 04:00:55.486272 /usr/lib/systemd/system-generators/torcx-generator[1589]: time="2024-12-13T04:00:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 04:00:55.486318 /usr/lib/systemd/system-generators/torcx-generator[1589]: time="2024-12-13T04:00:55Z" level=info msg="torcx already run"
Dec 13 04:00:55.605375 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 04:00:55.605644 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 04:00:55.629454 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 04:00:55.750154 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 04:00:55.750225 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 04:00:55.750864 systemd[1]: Stopped kubelet.service.
Dec 13 04:00:55.752821 systemd[1]: Starting kubelet.service...
Dec 13 04:00:56.615571 systemd[1]: Started kubelet.service.
Dec 13 04:00:56.739617 kubelet[1640]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:00:56.741116 kubelet[1640]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 04:00:56.741258 kubelet[1640]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 04:00:56.741595 kubelet[1640]: I1213 04:00:56.741523 1640 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 04:00:57.510074 kubelet[1640]: I1213 04:00:57.510038 1640 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 04:00:57.510374 kubelet[1640]: I1213 04:00:57.510343 1640 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 04:00:57.510699 kubelet[1640]: I1213 04:00:57.510684 1640 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 04:00:57.541645 kubelet[1640]: E1213 04:00:57.541544 1640 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.115:6443: connect: connection refused
Dec 13 04:00:57.547849 kubelet[1640]: I1213 04:00:57.547774 1640 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 04:00:57.574090 kubelet[1640]: I1213 04:00:57.573988 1640 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 04:00:57.574494 kubelet[1640]: I1213 04:00:57.574442 1640 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 04:00:57.574843 kubelet[1640]: I1213 04:00:57.574793 1640 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 04:00:57.574843 kubelet[1640]: I1213 04:00:57.574843 1640 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 04:00:57.575209 kubelet[1640]: I1213 04:00:57.574865 1640 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 04:00:57.576368 kubelet[1640]: I1213 04:00:57.576311 1640 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 04:00:57.576561 kubelet[1640]: I1213 04:00:57.576510 1640 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 04:00:57.577196 kubelet[1640]: I1213 04:00:57.577155 1640 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 04:00:57.577311 kubelet[1640]: I1213 04:00:57.577214 1640 kubelet.go:312] "Adding apiserver pod source"
Dec 13 04:00:57.577311 kubelet[1640]: I1213 04:00:57.577234 1640 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 04:00:57.577955 kubelet[1640]: W1213 04:00:57.577816 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-f-1413c5ec2e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused
Dec 13 04:00:57.578314 kubelet[1640]: E1213 04:00:57.578267 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-f-1413c5ec2e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused
Dec 13 04:00:57.579777 kubelet[1640]: W1213 04:00:57.579491 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused
Dec 13 04:00:57.579777 kubelet[1640]: E1213 04:00:57.579570 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused
Dec 13 04:00:57.580557 kubelet[1640]: I1213 04:00:57.580534 1640 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 04:00:57.595293 kubelet[1640]: I1213 04:00:57.595229 1640 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 04:00:57.595486 kubelet[1640]: W1213 04:00:57.595437 1640 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 04:00:57.597403 kubelet[1640]: I1213 04:00:57.597353 1640 server.go:1256] "Started kubelet"
Dec 13 04:00:57.611259 kubelet[1640]: I1213 04:00:57.611190 1640 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 04:00:57.612994 kubelet[1640]: I1213 04:00:57.612969 1640 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 04:00:57.615229 kubelet[1640]: I1213 04:00:57.615165 1640 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 04:00:57.615885 kubelet[1640]: I1213 04:00:57.615838 1640 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 04:00:57.619964 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 04:00:57.620077 kubelet[1640]: E1213 04:00:57.619404 1640 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.115:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.115:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-f-1413c5ec2e.novalocal.1810a09e236d0c19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-f-1413c5ec2e.novalocal,UID:ci-3510-3-6-f-1413c5ec2e.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-f-1413c5ec2e.novalocal,},FirstTimestamp:2024-12-13 04:00:57.597266969 +0000 UTC m=+0.967243443,LastTimestamp:2024-12-13 04:00:57.597266969 +0000 UTC m=+0.967243443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-f-1413c5ec2e.novalocal,}"
Dec 13 04:00:57.628954 kubelet[1640]: I1213 04:00:57.628738 1640 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 04:00:57.632500 kubelet[1640]: I1213 04:00:57.632447 1640 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 04:00:57.632963 kubelet[1640]: I1213 04:00:57.632922 1640 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 04:00:57.635181 kubelet[1640]: W1213 04:00:57.635086 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused
Dec 13 04:00:57.635311 kubelet[1640]: E1213 04:00:57.635212 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused
Dec 13 04:00:57.638645 kubelet[1640]: I1213 04:00:57.638596 1640 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 04:00:57.641904 kubelet[1640]: E1213 04:00:57.641851 1640 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-f-1413c5ec2e.novalocal?timeout=10s\": dial tcp 172.24.4.115:6443: connect: connection refused" interval="200ms"
Dec 13 04:00:57.645554 kubelet[1640]: I1213 04:00:57.645506 1640 factory.go:221] Registration of the containerd container factory successfully
Dec 13 04:00:57.645554 kubelet[1640]: I1213 04:00:57.645551 1640 factory.go:221] Registration of the systemd container factory successfully
Dec 13 04:00:57.645807 kubelet[1640]: I1213 04:00:57.645769 1640 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 04:00:57.663216 kubelet[1640]: E1213 04:00:57.663163 1640 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 04:00:57.677354 kubelet[1640]: I1213 04:00:57.677307 1640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 04:00:57.678776 kubelet[1640]: I1213 04:00:57.678745 1640 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 04:00:57.678840 kubelet[1640]: I1213 04:00:57.678785 1640 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 04:00:57.678840 kubelet[1640]: I1213 04:00:57.678813 1640 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 04:00:57.678919 kubelet[1640]: E1213 04:00:57.678869 1640 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 04:00:57.682088 kubelet[1640]: W1213 04:00:57.682062 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused
Dec 13 04:00:57.683254 kubelet[1640]: E1213 04:00:57.683239 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused
Dec 13 04:00:57.684498 kubelet[1640]: I1213 04:00:57.684484 1640 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 04:00:57.684580 kubelet[1640]: I1213 04:00:57.684569 1640 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 04:00:57.684689 kubelet[1640]: I1213 04:00:57.684652 1640 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 04:00:57.689398 kubelet[1640]: I1213 04:00:57.689383 1640 policy_none.go:49] "None policy: Start"
Dec 13 04:00:57.690098 kubelet[1640]: I1213 04:00:57.690085 1640 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 04:00:57.690182 kubelet[1640]: I1213 04:00:57.690172 1640 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 04:00:57.703784 systemd[1]: Created slice kubepods.slice.
Dec 13 04:00:57.708625 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 04:00:57.711900 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 04:00:57.718306 kubelet[1640]: I1213 04:00:57.718269 1640 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 04:00:57.718552 kubelet[1640]: I1213 04:00:57.718522 1640 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 04:00:57.721884 kubelet[1640]: E1213 04:00:57.721862 1640 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" not found" Dec 13 04:00:57.734463 kubelet[1640]: I1213 04:00:57.734439 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.735133 kubelet[1640]: E1213 04:00:57.735085 1640 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.115:6443/api/v1/nodes\": dial tcp 172.24.4.115:6443: connect: connection refused" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.781880 kubelet[1640]: I1213 04:00:57.779503 1640 topology_manager.go:215] "Topology Admit Handler" podUID="af0277aa5d36df22383d1045469840d9" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.783036 kubelet[1640]: I1213 04:00:57.783010 1640 topology_manager.go:215] "Topology Admit Handler" podUID="1fd8b27295d41b493d278c847940d9e9" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.788128 kubelet[1640]: I1213 04:00:57.788100 1640 topology_manager.go:215] "Topology Admit Handler" podUID="3cdfd8a7a73c0a9c2a518b36911c3bcf" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.797961 systemd[1]: Created slice kubepods-burstable-podaf0277aa5d36df22383d1045469840d9.slice. Dec 13 04:00:57.816233 systemd[1]: Created slice kubepods-burstable-pod1fd8b27295d41b493d278c847940d9e9.slice. Dec 13 04:00:57.823507 systemd[1]: Created slice kubepods-burstable-pod3cdfd8a7a73c0a9c2a518b36911c3bcf.slice. 
Dec 13 04:00:57.839694 kubelet[1640]: I1213 04:00:57.839587 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.839964 kubelet[1640]: I1213 04:00:57.839729 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fd8b27295d41b493d278c847940d9e9-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"1fd8b27295d41b493d278c847940d9e9\") " pod="kube-system/kube-scheduler-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.839964 kubelet[1640]: I1213 04:00:57.839802 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cdfd8a7a73c0a9c2a518b36911c3bcf-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"3cdfd8a7a73c0a9c2a518b36911c3bcf\") " pod="kube-system/kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.839964 kubelet[1640]: I1213 04:00:57.839867 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.839964 kubelet[1640]: I1213 04:00:57.839939 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.840253 kubelet[1640]: I1213 04:00:57.840019 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cdfd8a7a73c0a9c2a518b36911c3bcf-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"3cdfd8a7a73c0a9c2a518b36911c3bcf\") " pod="kube-system/kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.840253 kubelet[1640]: I1213 04:00:57.840086 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cdfd8a7a73c0a9c2a518b36911c3bcf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"3cdfd8a7a73c0a9c2a518b36911c3bcf\") " pod="kube-system/kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.840253 kubelet[1640]: I1213 04:00:57.840149 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.840253 kubelet[1640]: I1213 04:00:57.840225 1640 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.843741 kubelet[1640]: E1213 04:00:57.843700 1640 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-f-1413c5ec2e.novalocal?timeout=10s\": dial tcp 172.24.4.115:6443: connect: connection refused" interval="400ms" Dec 13 04:00:57.940911 kubelet[1640]: I1213 04:00:57.940828 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:57.941977 kubelet[1640]: E1213 04:00:57.941927 1640 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.115:6443/api/v1/nodes\": dial tcp 172.24.4.115:6443: connect: connection refused" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:58.111051 env[1142]: time="2024-12-13T04:00:58.110176491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal,Uid:af0277aa5d36df22383d1045469840d9,Namespace:kube-system,Attempt:0,}" Dec 13 04:00:58.128896 env[1142]: time="2024-12-13T04:00:58.128824075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-f-1413c5ec2e.novalocal,Uid:1fd8b27295d41b493d278c847940d9e9,Namespace:kube-system,Attempt:0,}" Dec 13 04:00:58.131203 env[1142]: time="2024-12-13T04:00:58.131145579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal,Uid:3cdfd8a7a73c0a9c2a518b36911c3bcf,Namespace:kube-system,Attempt:0,}" Dec 13 04:00:58.245498 kubelet[1640]: E1213 04:00:58.245417 1640 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-f-1413c5ec2e.novalocal?timeout=10s\": dial tcp 172.24.4.115:6443: connect: connection refused" interval="800ms" Dec 13 04:00:58.346478 kubelet[1640]: I1213 04:00:58.345872 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:58.346785 kubelet[1640]: E1213 04:00:58.346706 1640 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.115:6443/api/v1/nodes\": dial tcp 172.24.4.115:6443: connect: connection refused" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:58.481790 kubelet[1640]: W1213 04:00:58.481475 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Dec 13 04:00:58.481790 kubelet[1640]: E1213 04:00:58.481576 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
172.24.4.115:6443: connect: connection refused Dec 13 04:00:58.696524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621474793.mount: Deactivated successfully. Dec 13 04:00:58.708559 env[1142]: time="2024-12-13T04:00:58.708396638Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.714828 env[1142]: time="2024-12-13T04:00:58.714760533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.717005 env[1142]: time="2024-12-13T04:00:58.716953106Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.722345 kubelet[1640]: W1213 04:00:58.722223 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Dec 13 04:00:58.722529 kubelet[1640]: E1213 04:00:58.722407 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Dec 13 04:00:58.722856 env[1142]: time="2024-12-13T04:00:58.722800501Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.729484 env[1142]: time="2024-12-13T04:00:58.729429955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.740311 env[1142]: time="2024-12-13T04:00:58.738988632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.743229 env[1142]: time="2024-12-13T04:00:58.743179542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.746408 env[1142]: time="2024-12-13T04:00:58.746337395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.752467 env[1142]: time="2024-12-13T04:00:58.752374266Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.754751 env[1142]: time="2024-12-13T04:00:58.754601203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.756914 env[1142]: time="2024-12-13T04:00:58.756567009Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.759951 env[1142]: time="2024-12-13T04:00:58.758935261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:00:58.829074 env[1142]: time="2024-12-13T04:00:58.828927740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:00:58.829074 env[1142]: time="2024-12-13T04:00:58.829013862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:00:58.829074 env[1142]: time="2024-12-13T04:00:58.829064266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:00:58.829456 env[1142]: time="2024-12-13T04:00:58.829223024Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4596c4f6a330cb0de89726178fa0acd0c75b2c24e60d5c0412f813b715beb78 pid=1682 runtime=io.containerd.runc.v2 Dec 13 04:00:58.836549 env[1142]: time="2024-12-13T04:00:58.836452082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:00:58.836549 env[1142]: time="2024-12-13T04:00:58.836514539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:00:58.836858 env[1142]: time="2024-12-13T04:00:58.836531491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:00:58.838867 env[1142]: time="2024-12-13T04:00:58.838786881Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc15a5d78e5a066f1d3899ca81b4a7cd396984cec64c6be9f3909b2b304da1e5 pid=1689 runtime=io.containerd.runc.v2 Dec 13 04:00:58.841606 env[1142]: time="2024-12-13T04:00:58.841528683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:00:58.841735 env[1142]: time="2024-12-13T04:00:58.841609665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:00:58.841735 env[1142]: time="2024-12-13T04:00:58.841635213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:00:58.841898 env[1142]: time="2024-12-13T04:00:58.841857279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f736a1fc23f1f820b21b0e3b566522ead73d4fe62c926d5c813821d78be7d8e9 pid=1702 runtime=io.containerd.runc.v2 Dec 13 04:00:58.853133 systemd[1]: Started cri-containerd-d4596c4f6a330cb0de89726178fa0acd0c75b2c24e60d5c0412f813b715beb78.scope. 
Dec 13 04:00:58.859085 kubelet[1640]: W1213 04:00:58.858538 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Dec 13 04:00:58.859085 kubelet[1640]: E1213 04:00:58.858580 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Dec 13 04:00:58.874284 systemd[1]: Started cri-containerd-f736a1fc23f1f820b21b0e3b566522ead73d4fe62c926d5c813821d78be7d8e9.scope. Dec 13 04:00:58.894366 systemd[1]: Started cri-containerd-cc15a5d78e5a066f1d3899ca81b4a7cd396984cec64c6be9f3909b2b304da1e5.scope. Dec 13 04:00:58.934091 env[1142]: time="2024-12-13T04:00:58.934026880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal,Uid:af0277aa5d36df22383d1045469840d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4596c4f6a330cb0de89726178fa0acd0c75b2c24e60d5c0412f813b715beb78\"" Dec 13 04:00:58.940511 env[1142]: time="2024-12-13T04:00:58.940459775Z" level=info msg="CreateContainer within sandbox \"d4596c4f6a330cb0de89726178fa0acd0c75b2c24e60d5c0412f813b715beb78\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 04:00:58.967557 env[1142]: time="2024-12-13T04:00:58.967513554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal,Uid:3cdfd8a7a73c0a9c2a518b36911c3bcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc15a5d78e5a066f1d3899ca81b4a7cd396984cec64c6be9f3909b2b304da1e5\"" Dec 13 04:00:58.970708 env[1142]: time="2024-12-13T04:00:58.970617064Z" level=info msg="CreateContainer within sandbox \"d4596c4f6a330cb0de89726178fa0acd0c75b2c24e60d5c0412f813b715beb78\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b1d47afe5f38b9511065a77c06c7cc13f03f1c85250ad1cfcbfa30b5a28dfd00\"" Dec 13 04:00:58.971716 env[1142]: time="2024-12-13T04:00:58.971692762Z" level=info msg="CreateContainer within sandbox \"cc15a5d78e5a066f1d3899ca81b4a7cd396984cec64c6be9f3909b2b304da1e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 04:00:58.971902 env[1142]: time="2024-12-13T04:00:58.971850618Z" level=info msg="StartContainer for \"b1d47afe5f38b9511065a77c06c7cc13f03f1c85250ad1cfcbfa30b5a28dfd00\"" Dec 13 04:00:58.979843 env[1142]: time="2024-12-13T04:00:58.979796359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-f-1413c5ec2e.novalocal,Uid:1fd8b27295d41b493d278c847940d9e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f736a1fc23f1f820b21b0e3b566522ead73d4fe62c926d5c813821d78be7d8e9\"" Dec 13 04:00:58.987868 env[1142]: time="2024-12-13T04:00:58.987826259Z" level=info msg="CreateContainer within sandbox \"f736a1fc23f1f820b21b0e3b566522ead73d4fe62c926d5c813821d78be7d8e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 04:00:59.003342 systemd[1]: Started cri-containerd-b1d47afe5f38b9511065a77c06c7cc13f03f1c85250ad1cfcbfa30b5a28dfd00.scope. 
Dec 13 04:00:59.009245 env[1142]: time="2024-12-13T04:00:59.009196608Z" level=info msg="CreateContainer within sandbox \"cc15a5d78e5a066f1d3899ca81b4a7cd396984cec64c6be9f3909b2b304da1e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8ebcba591d627a169adb205237f2d98550add8fcd158703cfe126afce21c0a0f\"" Dec 13 04:00:59.011650 env[1142]: time="2024-12-13T04:00:59.011255069Z" level=info msg="StartContainer for \"8ebcba591d627a169adb205237f2d98550add8fcd158703cfe126afce21c0a0f\"" Dec 13 04:00:59.036869 env[1142]: time="2024-12-13T04:00:59.036822010Z" level=info msg="CreateContainer within sandbox \"f736a1fc23f1f820b21b0e3b566522ead73d4fe62c926d5c813821d78be7d8e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"abf13237b4d1f62578be5d58e498ef72a811fd5ac4e02ff62f3c12879a73e3d9\"" Dec 13 04:00:59.044883 env[1142]: time="2024-12-13T04:00:59.044835037Z" level=info msg="StartContainer for \"abf13237b4d1f62578be5d58e498ef72a811fd5ac4e02ff62f3c12879a73e3d9\"" Dec 13 04:00:59.046119 kubelet[1640]: E1213 04:00:59.046091 1640 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-f-1413c5ec2e.novalocal?timeout=10s\": dial tcp 172.24.4.115:6443: connect: connection refused" interval="1.6s" Dec 13 04:00:59.049853 systemd[1]: Started cri-containerd-8ebcba591d627a169adb205237f2d98550add8fcd158703cfe126afce21c0a0f.scope. Dec 13 04:00:59.070323 systemd[1]: Started cri-containerd-abf13237b4d1f62578be5d58e498ef72a811fd5ac4e02ff62f3c12879a73e3d9.scope. Dec 13 04:00:59.081341 kubelet[1640]: W1213 04:00:59.081264 1640 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-f-1413c5ec2e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Dec 13 04:00:59.081341 kubelet[1640]: E1213 04:00:59.081340 1640 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-f-1413c5ec2e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Dec 13 04:00:59.117179 env[1142]: time="2024-12-13T04:00:59.117097173Z" level=info msg="StartContainer for \"b1d47afe5f38b9511065a77c06c7cc13f03f1c85250ad1cfcbfa30b5a28dfd00\" returns successfully" Dec 13 04:00:59.136544 env[1142]: time="2024-12-13T04:00:59.136469365Z" level=info msg="StartContainer for \"8ebcba591d627a169adb205237f2d98550add8fcd158703cfe126afce21c0a0f\" returns successfully" Dec 13 04:00:59.149554 kubelet[1640]: I1213 04:00:59.149493 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:59.149944 kubelet[1640]: E1213 04:00:59.149924 1640 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.115:6443/api/v1/nodes\": dial tcp 172.24.4.115:6443: connect: connection refused" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:00:59.174887 env[1142]: time="2024-12-13T04:00:59.174810774Z" level=info msg="StartContainer for \"abf13237b4d1f62578be5d58e498ef72a811fd5ac4e02ff62f3c12879a73e3d9\" returns successfully" Dec 13 04:00:59.727967 kubelet[1640]: E1213 04:00:59.727931 1640 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed 
while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.115:6443: connect: connection refused Dec 13 04:01:00.753138 kubelet[1640]: I1213 04:01:00.752995 1640 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:01.996935 kubelet[1640]: E1213 04:01:01.996412 1640 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-f-1413c5ec2e.novalocal\" not found" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:02.034880 kubelet[1640]: I1213 04:01:02.034830 1640 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:02.048846 kubelet[1640]: E1213 04:01:02.048813 1640 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" not found" Dec 13 04:01:02.149260 kubelet[1640]: E1213 04:01:02.149193 1640 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" not found" Dec 13 04:01:02.250432 kubelet[1640]: E1213 04:01:02.250171 1640 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" not found" Dec 13 04:01:02.581485 kubelet[1640]: I1213 04:01:02.581357 1640 apiserver.go:52] "Watching apiserver" Dec 13 04:01:02.633319 kubelet[1640]: I1213 04:01:02.633176 1640 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 04:01:05.249523 kubelet[1640]: W1213 04:01:05.249440 1640 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 04:01:05.451242 systemd[1]: Reloading. Dec 13 04:01:05.593233 /usr/lib/systemd/system-generators/torcx-generator[1931]: time="2024-12-13T04:01:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 04:01:05.593705 /usr/lib/systemd/system-generators/torcx-generator[1931]: time="2024-12-13T04:01:05Z" level=info msg="torcx already run" Dec 13 04:01:05.672292 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 04:01:05.672314 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 04:01:05.697960 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 04:01:05.840050 systemd[1]: Stopping kubelet.service... Dec 13 04:01:05.855650 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 04:01:05.856018 systemd[1]: Stopped kubelet.service. Dec 13 04:01:05.856142 systemd[1]: kubelet.service: Consumed 1.721s CPU time. Dec 13 04:01:05.858712 systemd[1]: Starting kubelet.service... Dec 13 04:01:08.041624 systemd[1]: Started kubelet.service. 
Dec 13 04:01:08.182281 kubelet[1973]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 04:01:08.182281 kubelet[1973]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 04:01:08.182281 kubelet[1973]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 04:01:08.183292 kubelet[1973]: I1213 04:01:08.182390 1973 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 04:01:08.190891 kubelet[1973]: I1213 04:01:08.190815 1973 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 04:01:08.190891 kubelet[1973]: I1213 04:01:08.190844 1973 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 04:01:08.191318 kubelet[1973]: I1213 04:01:08.191217 1973 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 04:01:08.195704 kubelet[1973]: I1213 04:01:08.194815 1973 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 04:01:08.202151 kubelet[1973]: I1213 04:01:08.202129 1973 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 04:01:08.216241 kubelet[1973]: I1213 04:01:08.215918 1973 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 04:01:08.216241 kubelet[1973]: I1213 04:01:08.216190 1973 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 04:01:08.216400 kubelet[1973]: I1213 04:01:08.216374 1973 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 04:01:08.216495 kubelet[1973]: I1213 04:01:08.216419 1973 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 04:01:08.216495 kubelet[1973]: I1213 04:01:08.216433 1973 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 04:01:08.216708 kubelet[1973]: I1213 04:01:08.216622 1973 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:01:08.216775 kubelet[1973]: I1213 04:01:08.216759 1973 kubelet.go:396] "Attempting to sync node with API server" Dec 13 04:01:08.216816 kubelet[1973]: I1213 04:01:08.216781 1973 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 04:01:08.222128 kubelet[1973]: I1213 04:01:08.222094 1973 kubelet.go:312] "Adding apiserver pod source" Dec 13 04:01:08.222332 kubelet[1973]: I1213 04:01:08.222304 1973 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 04:01:08.230581 kubelet[1973]: I1213 04:01:08.229907 1973 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 04:01:08.230581 kubelet[1973]: I1213 04:01:08.230112 1973 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 04:01:08.230187 sudo[1987]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 04:01:08.230964 kubelet[1973]: I1213 04:01:08.230699 1973 server.go:1256] "Started kubelet" Dec 13 04:01:08.230448 sudo[1987]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 04:01:08.235821 kubelet[1973]: I1213 04:01:08.235797 1973 apiserver.go:52] "Watching apiserver" Dec 13 04:01:08.237821 kubelet[1973]: I1213 04:01:08.237803 1973 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 04:01:08.245696 kubelet[1973]: E1213 
04:01:08.245645 1973 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 04:01:08.255055 kubelet[1973]: I1213 04:01:08.238601 1973 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 04:01:08.256473 kubelet[1973]: I1213 04:01:08.256455 1973 server.go:461] "Adding debug handlers to kubelet server" Dec 13 04:01:08.257982 kubelet[1973]: I1213 04:01:08.238676 1973 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 04:01:08.258256 kubelet[1973]: I1213 04:01:08.258242 1973 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 04:01:08.260859 kubelet[1973]: I1213 04:01:08.260844 1973 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 04:01:08.266110 kubelet[1973]: I1213 04:01:08.266089 1973 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 04:01:08.266435 kubelet[1973]: I1213 04:01:08.266424 1973 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 04:01:08.269034 kubelet[1973]: I1213 04:01:08.269021 1973 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 04:01:08.269637 kubelet[1973]: I1213 04:01:08.269603 1973 factory.go:221] Registration of the systemd container factory successfully Dec 13 04:01:08.270054 kubelet[1973]: I1213 04:01:08.270041 1973 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 04:01:08.270141 kubelet[1973]: I1213 04:01:08.270131 1973 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 04:01:08.270225 kubelet[1973]: I1213 04:01:08.270214 1973 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 04:01:08.270352 kubelet[1973]: E1213 04:01:08.270339 1973 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 04:01:08.270808 kubelet[1973]: I1213 04:01:08.270776 1973 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 04:01:08.279586 kubelet[1973]: I1213 04:01:08.279555 1973 factory.go:221] Registration of the containerd container factory successfully Dec 13 04:01:08.330045 kubelet[1973]: I1213 04:01:08.329651 1973 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 04:01:08.330045 kubelet[1973]: I1213 04:01:08.329772 1973 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 04:01:08.330045 kubelet[1973]: I1213 04:01:08.329814 1973 state_mem.go:36] "Initialized new in-memory state store" Dec 13 04:01:08.330302 kubelet[1973]: I1213 04:01:08.330075 1973 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 04:01:08.330302 kubelet[1973]: I1213 04:01:08.330107 1973 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 04:01:08.330302 kubelet[1973]: I1213 04:01:08.330116 1973 policy_none.go:49] "None policy: Start" Dec 13 04:01:08.334713 kubelet[1973]: I1213 04:01:08.333564 1973 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 04:01:08.334713 kubelet[1973]: I1213 04:01:08.333611 1973 state_mem.go:35] "Initializing new in-memory state store" Dec 13 04:01:08.334713 kubelet[1973]: I1213 04:01:08.333860 1973 state_mem.go:75] "Updated machine memory state" Dec 13 04:01:08.342475 
kubelet[1973]: I1213 04:01:08.342453 1973 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 04:01:08.353688 kubelet[1973]: I1213 04:01:08.353626 1973 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 04:01:08.371614 kubelet[1973]: I1213 04:01:08.371582 1973 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.372232 kubelet[1973]: I1213 04:01:08.372215 1973 topology_manager.go:215] "Topology Admit Handler" podUID="3cdfd8a7a73c0a9c2a518b36911c3bcf" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.372420 kubelet[1973]: I1213 04:01:08.372407 1973 topology_manager.go:215] "Topology Admit Handler" podUID="af0277aa5d36df22383d1045469840d9" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.372693 kubelet[1973]: I1213 04:01:08.372633 1973 topology_manager.go:215] "Topology Admit Handler" podUID="1fd8b27295d41b493d278c847940d9e9" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.401401 kubelet[1973]: I1213 04:01:08.401367 1973 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.401742 kubelet[1973]: I1213 04:01:08.401729 1973 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.403814 kubelet[1973]: W1213 04:01:08.403799 1973 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 04:01:08.403961 kubelet[1973]: W1213 04:01:08.403950 1973 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 04:01:08.439308 kubelet[1973]: I1213 04:01:08.439283 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal" podStartSLOduration=3.439207268 podStartE2EDuration="3.439207268s" podCreationTimestamp="2024-12-13 04:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:01:08.438846542 +0000 UTC m=+0.355589504" watchObservedRunningTime="2024-12-13 04:01:08.439207268 +0000 UTC m=+0.355950220" Dec 13 04:01:08.459150 kubelet[1973]: I1213 04:01:08.459123 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" podStartSLOduration=0.459078295 podStartE2EDuration="459.078295ms" podCreationTimestamp="2024-12-13 04:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:01:08.450588112 +0000 UTC m=+0.367331094" watchObservedRunningTime="2024-12-13 04:01:08.459078295 +0000 UTC m=+0.375821257" Dec 13 04:01:08.467197 kubelet[1973]: I1213 04:01:08.467182 1973 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 04:01:08.470616 kubelet[1973]: I1213 04:01:08.470601 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.470798 kubelet[1973]: I1213 04:01:08.470785 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.470922 kubelet[1973]: I1213 04:01:08.470909 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cdfd8a7a73c0a9c2a518b36911c3bcf-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"3cdfd8a7a73c0a9c2a518b36911c3bcf\") " pod="kube-system/kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.471038 kubelet[1973]: I1213 04:01:08.471026 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cdfd8a7a73c0a9c2a518b36911c3bcf-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"3cdfd8a7a73c0a9c2a518b36911c3bcf\") " pod="kube-system/kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.471160 kubelet[1973]: I1213 04:01:08.471148 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cdfd8a7a73c0a9c2a518b36911c3bcf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"3cdfd8a7a73c0a9c2a518b36911c3bcf\") " pod="kube-system/kube-apiserver-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.471275 kubelet[1973]: I1213 04:01:08.471264 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.471409 kubelet[1973]: I1213 04:01:08.471398 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.471530 kubelet[1973]: I1213 04:01:08.471519 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af0277aa5d36df22383d1045469840d9-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"af0277aa5d36df22383d1045469840d9\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.471644 kubelet[1973]: I1213 04:01:08.471633 1973 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fd8b27295d41b493d278c847940d9e9-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-f-1413c5ec2e.novalocal\" (UID: \"1fd8b27295d41b493d278c847940d9e9\") " pod="kube-system/kube-scheduler-ci-3510-3-6-f-1413c5ec2e.novalocal" Dec 13 04:01:08.710104 kubelet[1973]: I1213 04:01:08.709988 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-f-1413c5ec2e.novalocal" podStartSLOduration=0.709909395 podStartE2EDuration="709.909395ms" podCreationTimestamp="2024-12-13 04:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:01:08.459808064 +0000 UTC m=+0.376551026" watchObservedRunningTime="2024-12-13 04:01:08.709909395 +0000 UTC m=+0.626652397" Dec 13 04:01:09.423897 sudo[1987]: pam_unix(sudo:session): session closed for user root Dec 13 04:01:12.255179 sudo[1286]: pam_unix(sudo:session): session closed for user root Dec 13 04:01:12.536576 sshd[1272]: pam_unix(sshd:session): session closed for user core Dec 13 04:01:12.542228 systemd[1]: sshd@6-172.24.4.115:22-172.24.4.1:44740.service: Deactivated successfully. Dec 13 04:01:12.544051 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 04:01:12.544418 systemd[1]: session-7.scope: Consumed 8.993s CPU time. Dec 13 04:01:12.545616 systemd-logind[1134]: Session 7 logged out. Waiting for processes to exit. Dec 13 04:01:12.548058 systemd-logind[1134]: Removed session 7. Dec 13 04:01:18.796636 kubelet[1973]: I1213 04:01:18.796556 1973 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 04:01:18.798731 env[1142]: time="2024-12-13T04:01:18.798531093Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 04:01:18.799999 kubelet[1973]: I1213 04:01:18.799970 1973 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 04:01:19.607128 kubelet[1973]: I1213 04:01:19.607093 1973 topology_manager.go:215] "Topology Admit Handler" podUID="1a34faea-a500-4bcb-85ce-7b85a42b01e0" podNamespace="kube-system" podName="cilium-x52vp" Dec 13 04:01:19.609438 kubelet[1973]: I1213 04:01:19.609408 1973 topology_manager.go:215] "Topology Admit Handler" podUID="1847d792-d236-477e-b0e0-f6dd66f2a920" podNamespace="kube-system" podName="kube-proxy-mnkg7" Dec 13 04:01:19.615855 systemd[1]: Created slice kubepods-burstable-pod1a34faea_a500_4bcb_85ce_7b85a42b01e0.slice. Dec 13 04:01:19.621486 systemd[1]: Created slice kubepods-besteffort-pod1847d792_d236_477e_b0e0_f6dd66f2a920.slice. 
Dec 13 04:01:19.627235 kubelet[1973]: W1213 04:01:19.627175 1973 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.627235 kubelet[1973]: E1213 04:01:19.627237 1973 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.627519 kubelet[1973]: W1213 04:01:19.627470 1973 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.627519 kubelet[1973]: E1213 04:01:19.627494 1973 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.627804 kubelet[1973]: W1213 04:01:19.627784 1973 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.641518 kubelet[1973]: E1213 04:01:19.627874 1973 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.641518 kubelet[1973]: W1213 04:01:19.627924 1973 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.641518 kubelet[1973]: E1213 04:01:19.627937 1973 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.641518 kubelet[1973]: W1213 04:01:19.627971 1973 reflector.go:539] 
object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.641518 kubelet[1973]: E1213 04:01:19.627990 1973 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-6-f-1413c5ec2e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-f-1413c5ec2e.novalocal' and this object Dec 13 04:01:19.646188 kubelet[1973]: I1213 04:01:19.646158 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccbs2\" (UniqueName: \"kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-kube-api-access-ccbs2\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.646412 kubelet[1973]: I1213 04:01:19.646399 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbgx2\" (UniqueName: \"kubernetes.io/projected/1847d792-d236-477e-b0e0-f6dd66f2a920-kube-api-access-kbgx2\") pod \"kube-proxy-mnkg7\" (UID: \"1847d792-d236-477e-b0e0-f6dd66f2a920\") " pod="kube-system/kube-proxy-mnkg7" Dec 13 04:01:19.646537 kubelet[1973]: I1213 04:01:19.646523 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-host-proc-sys-kernel\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.646649 kubelet[1973]: I1213 04:01:19.646637 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1847d792-d236-477e-b0e0-f6dd66f2a920-kube-proxy\") pod \"kube-proxy-mnkg7\" (UID: \"1847d792-d236-477e-b0e0-f6dd66f2a920\") " pod="kube-system/kube-proxy-mnkg7" Dec 13 04:01:19.646788 kubelet[1973]: I1213 04:01:19.646773 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-host-proc-sys-net\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.646894 kubelet[1973]: I1213 04:01:19.646882 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1847d792-d236-477e-b0e0-f6dd66f2a920-lib-modules\") pod \"kube-proxy-mnkg7\" (UID: \"1847d792-d236-477e-b0e0-f6dd66f2a920\") " pod="kube-system/kube-proxy-mnkg7" Dec 13 04:01:19.647056 kubelet[1973]: I1213 04:01:19.647044 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-bpf-maps\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.647169 kubelet[1973]: I1213 04:01:19.647157 1973 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1847d792-d236-477e-b0e0-f6dd66f2a920-xtables-lock\") pod \"kube-proxy-mnkg7\" (UID: \"1847d792-d236-477e-b0e0-f6dd66f2a920\") " pod="kube-system/kube-proxy-mnkg7" Dec 13 04:01:19.647277 kubelet[1973]: I1213 04:01:19.647266 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cni-path\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.647405 kubelet[1973]: I1213 04:01:19.647387 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-lib-modules\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.647519 kubelet[1973]: I1213 04:01:19.647505 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-xtables-lock\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.647697 kubelet[1973]: I1213 04:01:19.647684 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-hostproc\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.647812 kubelet[1973]: I1213 04:01:19.647799 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-config-path\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.647921 kubelet[1973]: I1213 04:01:19.647909 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-hubble-tls\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.648030 kubelet[1973]: I1213 04:01:19.648018 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a34faea-a500-4bcb-85ce-7b85a42b01e0-clustermesh-secrets\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.648133 kubelet[1973]: I1213 04:01:19.648121 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-run\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.648241 kubelet[1973]: I1213 04:01:19.648229 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-cgroup\") pod \"cilium-x52vp\" (UID: 
\"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.648348 kubelet[1973]: I1213 04:01:19.648335 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-etc-cni-netd\") pod \"cilium-x52vp\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " pod="kube-system/cilium-x52vp" Dec 13 04:01:19.911753 kubelet[1973]: I1213 04:01:19.911604 1973 topology_manager.go:215] "Topology Admit Handler" podUID="6686003e-0404-4e8c-bfda-9b230d216233" podNamespace="kube-system" podName="cilium-operator-5cc964979-vfkkl" Dec 13 04:01:19.921430 systemd[1]: Created slice kubepods-besteffort-pod6686003e_0404_4e8c_bfda_9b230d216233.slice. Dec 13 04:01:19.950733 kubelet[1973]: I1213 04:01:19.950691 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzxp\" (UniqueName: \"kubernetes.io/projected/6686003e-0404-4e8c-bfda-9b230d216233-kube-api-access-whzxp\") pod \"cilium-operator-5cc964979-vfkkl\" (UID: \"6686003e-0404-4e8c-bfda-9b230d216233\") " pod="kube-system/cilium-operator-5cc964979-vfkkl" Dec 13 04:01:19.951051 kubelet[1973]: I1213 04:01:19.951028 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6686003e-0404-4e8c-bfda-9b230d216233-cilium-config-path\") pod \"cilium-operator-5cc964979-vfkkl\" (UID: \"6686003e-0404-4e8c-bfda-9b230d216233\") " pod="kube-system/cilium-operator-5cc964979-vfkkl" Dec 13 04:01:20.751120 kubelet[1973]: E1213 04:01:20.751072 1973 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Dec 13 04:01:20.753570 kubelet[1973]: E1213 04:01:20.751174 1973 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a34faea-a500-4bcb-85ce-7b85a42b01e0-clustermesh-secrets podName:1a34faea-a500-4bcb-85ce-7b85a42b01e0 nodeName:}" failed. No retries permitted until 2024-12-13 04:01:21.251150645 +0000 UTC m=+13.167893597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/1a34faea-a500-4bcb-85ce-7b85a42b01e0-clustermesh-secrets") pod "cilium-x52vp" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0") : failed to sync secret cache: timed out waiting for the condition Dec 13 04:01:20.838578 kubelet[1973]: E1213 04:01:20.838523 1973 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 04:01:20.838578 kubelet[1973]: E1213 04:01:20.838566 1973 projected.go:200] Error preparing data for projected volume kube-api-access-ccbs2 for pod kube-system/cilium-x52vp: failed to sync configmap cache: timed out waiting for the condition Dec 13 04:01:20.838811 kubelet[1973]: E1213 04:01:20.838649 1973 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-kube-api-access-ccbs2 podName:1a34faea-a500-4bcb-85ce-7b85a42b01e0 nodeName:}" failed. No retries permitted until 2024-12-13 04:01:21.338624782 +0000 UTC m=+13.255367744 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ccbs2" (UniqueName: "kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-kube-api-access-ccbs2") pod "cilium-x52vp" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0") : failed to sync configmap cache: timed out waiting for the condition Dec 13 04:01:20.853639 kubelet[1973]: E1213 04:01:20.853598 1973 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 04:01:20.853639 kubelet[1973]: E1213 04:01:20.853631 1973 projected.go:200] Error preparing data for projected volume kube-api-access-kbgx2 for pod kube-system/kube-proxy-mnkg7: failed to sync configmap cache: timed out waiting for the condition Dec 13 04:01:20.853891 kubelet[1973]: E1213 04:01:20.853727 1973 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1847d792-d236-477e-b0e0-f6dd66f2a920-kube-api-access-kbgx2 podName:1847d792-d236-477e-b0e0-f6dd66f2a920 nodeName:}" failed. No retries permitted until 2024-12-13 04:01:21.353704183 +0000 UTC m=+13.270447135 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kbgx2" (UniqueName: "kubernetes.io/projected/1847d792-d236-477e-b0e0-f6dd66f2a920-kube-api-access-kbgx2") pod "kube-proxy-mnkg7" (UID: "1847d792-d236-477e-b0e0-f6dd66f2a920") : failed to sync configmap cache: timed out waiting for the condition Dec 13 04:01:21.127400 env[1142]: time="2024-12-13T04:01:21.127263993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vfkkl,Uid:6686003e-0404-4e8c-bfda-9b230d216233,Namespace:kube-system,Attempt:0,}" Dec 13 04:01:21.179763 env[1142]: time="2024-12-13T04:01:21.178786343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:01:21.179763 env[1142]: time="2024-12-13T04:01:21.178879297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:01:21.179763 env[1142]: time="2024-12-13T04:01:21.178911177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:01:21.179763 env[1142]: time="2024-12-13T04:01:21.179247848Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a pid=2050 runtime=io.containerd.runc.v2 Dec 13 04:01:21.235173 systemd[1]: Started cri-containerd-2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a.scope. 
Dec 13 04:01:21.302958 env[1142]: time="2024-12-13T04:01:21.302883917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vfkkl,Uid:6686003e-0404-4e8c-bfda-9b230d216233,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\"" Dec 13 04:01:21.306453 env[1142]: time="2024-12-13T04:01:21.306419929Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 04:01:21.423478 env[1142]: time="2024-12-13T04:01:21.423310135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x52vp,Uid:1a34faea-a500-4bcb-85ce-7b85a42b01e0,Namespace:kube-system,Attempt:0,}" Dec 13 04:01:21.444542 env[1142]: time="2024-12-13T04:01:21.444477935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mnkg7,Uid:1847d792-d236-477e-b0e0-f6dd66f2a920,Namespace:kube-system,Attempt:0,}" Dec 13 04:01:21.479323 env[1142]: time="2024-12-13T04:01:21.479221106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:01:21.479323 env[1142]: time="2024-12-13T04:01:21.479327506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:01:21.479826 env[1142]: time="2024-12-13T04:01:21.479360488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:01:21.480184 env[1142]: time="2024-12-13T04:01:21.480081620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b pid=2097 runtime=io.containerd.runc.v2 Dec 13 04:01:21.500614 systemd[1]: Started cri-containerd-e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b.scope. Dec 13 04:01:21.523704 env[1142]: time="2024-12-13T04:01:21.523234051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:01:21.523704 env[1142]: time="2024-12-13T04:01:21.523281891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:01:21.523704 env[1142]: time="2024-12-13T04:01:21.523296248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:01:21.523704 env[1142]: time="2024-12-13T04:01:21.523473821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c423ac0ef36cbc9afbba5f408022986a0fd8c311f6c1e53314bc9245d9b6f5b1 pid=2121 runtime=io.containerd.runc.v2 Dec 13 04:01:21.560402 env[1142]: time="2024-12-13T04:01:21.560308845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x52vp,Uid:1a34faea-a500-4bcb-85ce-7b85a42b01e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\"" Dec 13 04:01:21.573101 systemd[1]: run-containerd-runc-k8s.io-c423ac0ef36cbc9afbba5f408022986a0fd8c311f6c1e53314bc9245d9b6f5b1-runc.JY0Lfd.mount: Deactivated successfully. Dec 13 04:01:21.575155 systemd[1]: Started cri-containerd-c423ac0ef36cbc9afbba5f408022986a0fd8c311f6c1e53314bc9245d9b6f5b1.scope. 
Dec 13 04:01:21.600022 env[1142]: time="2024-12-13T04:01:21.599980459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mnkg7,Uid:1847d792-d236-477e-b0e0-f6dd66f2a920,Namespace:kube-system,Attempt:0,} returns sandbox id \"c423ac0ef36cbc9afbba5f408022986a0fd8c311f6c1e53314bc9245d9b6f5b1\"" Dec 13 04:01:21.604711 env[1142]: time="2024-12-13T04:01:21.604606485Z" level=info msg="CreateContainer within sandbox \"c423ac0ef36cbc9afbba5f408022986a0fd8c311f6c1e53314bc9245d9b6f5b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 04:01:21.665002 env[1142]: time="2024-12-13T04:01:21.664930462Z" level=info msg="CreateContainer within sandbox \"c423ac0ef36cbc9afbba5f408022986a0fd8c311f6c1e53314bc9245d9b6f5b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26715bd6d9e382c8fe7419b9b63bbc593620aef156b2d1f2bf520d519aabb4c6\"" Dec 13 04:01:21.666648 env[1142]: time="2024-12-13T04:01:21.666596947Z" level=info msg="StartContainer for \"26715bd6d9e382c8fe7419b9b63bbc593620aef156b2d1f2bf520d519aabb4c6\"" Dec 13 04:01:21.705392 systemd[1]: Started cri-containerd-26715bd6d9e382c8fe7419b9b63bbc593620aef156b2d1f2bf520d519aabb4c6.scope. Dec 13 04:01:21.752394 env[1142]: time="2024-12-13T04:01:21.752165822Z" level=info msg="StartContainer for \"26715bd6d9e382c8fe7419b9b63bbc593620aef156b2d1f2bf520d519aabb4c6\" returns successfully" Dec 13 04:01:22.516345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4019413616.mount: Deactivated successfully. Dec 13 04:01:24.792994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374533821.mount: Deactivated successfully. Dec 13 04:01:27.974328 env[1142]: time="2024-12-13T04:01:27.974042704Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:01:27.977891 env[1142]: time="2024-12-13T04:01:27.977867578Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:01:27.983365 env[1142]: time="2024-12-13T04:01:27.983321507Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:01:27.984430 env[1142]: time="2024-12-13T04:01:27.984401763Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 04:01:27.986989 env[1142]: time="2024-12-13T04:01:27.986955863Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 04:01:27.992803 env[1142]: time="2024-12-13T04:01:27.992310456Z" level=info msg="CreateContainer within sandbox \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 04:01:28.029514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262119730.mount: Deactivated successfully. 
Dec 13 04:01:28.037203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2487430985.mount: Deactivated successfully. Dec 13 04:01:28.048956 env[1142]: time="2024-12-13T04:01:28.048845257Z" level=info msg="CreateContainer within sandbox \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\"" Dec 13 04:01:28.051012 env[1142]: time="2024-12-13T04:01:28.050986212Z" level=info msg="StartContainer for \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\"" Dec 13 04:01:28.096894 systemd[1]: Started cri-containerd-e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f.scope. Dec 13 04:01:28.146210 env[1142]: time="2024-12-13T04:01:28.146108613Z" level=info msg="StartContainer for \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\" returns successfully" Dec 13 04:01:28.316249 kubelet[1973]: I1213 04:01:28.316215 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mnkg7" podStartSLOduration=9.316170173 podStartE2EDuration="9.316170173s" podCreationTimestamp="2024-12-13 04:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:01:22.351308299 +0000 UTC m=+14.268051302" watchObservedRunningTime="2024-12-13 04:01:28.316170173 +0000 UTC m=+20.232913135" Dec 13 04:01:36.057346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606050152.mount: Deactivated successfully. Dec 13 04:01:41.494807 env[1142]: time="2024-12-13T04:01:41.494601336Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:01:41.504770 env[1142]: time="2024-12-13T04:01:41.502851072Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:01:41.511008 env[1142]: time="2024-12-13T04:01:41.510919539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 04:01:41.514459 env[1142]: time="2024-12-13T04:01:41.514366553Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 04:01:41.536634 env[1142]: time="2024-12-13T04:01:41.536529454Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:01:41.602435 env[1142]: time="2024-12-13T04:01:41.602347832Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599\"" Dec 13 04:01:41.607146 env[1142]: time="2024-12-13T04:01:41.607078195Z" level=info msg="StartContainer for 
\"cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599\"" Dec 13 04:01:41.669221 systemd[1]: Started cri-containerd-cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599.scope. Dec 13 04:01:41.710304 env[1142]: time="2024-12-13T04:01:41.710241289Z" level=info msg="StartContainer for \"cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599\" returns successfully" Dec 13 04:01:41.722084 systemd[1]: cri-containerd-cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599.scope: Deactivated successfully. Dec 13 04:01:42.256572 env[1142]: time="2024-12-13T04:01:42.256445473Z" level=info msg="shim disconnected" id=cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599 Dec 13 04:01:42.257168 env[1142]: time="2024-12-13T04:01:42.257093074Z" level=warning msg="cleaning up after shim disconnected" id=cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599 namespace=k8s.io Dec 13 04:01:42.257366 env[1142]: time="2024-12-13T04:01:42.257326851Z" level=info msg="cleaning up dead shim" Dec 13 04:01:42.282923 env[1142]: time="2024-12-13T04:01:42.282854601Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:01:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2416 runtime=io.containerd.runc.v2\n" Dec 13 04:01:42.588360 systemd[1]: run-containerd-runc-k8s.io-cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599-runc.BACDVX.mount: Deactivated successfully. Dec 13 04:01:42.588578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599-rootfs.mount: Deactivated successfully. Dec 13 04:01:42.644070 env[1142]: time="2024-12-13T04:01:42.643989494Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:01:42.695476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849627680.mount: Deactivated successfully. Dec 13 04:01:42.698290 kubelet[1973]: I1213 04:01:42.697926 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-vfkkl" podStartSLOduration=17.016943132 podStartE2EDuration="23.69781724s" podCreationTimestamp="2024-12-13 04:01:19 +0000 UTC" firstStartedPulling="2024-12-13 04:01:21.304258344 +0000 UTC m=+13.221001297" lastFinishedPulling="2024-12-13 04:01:27.985132383 +0000 UTC m=+19.901875405" observedRunningTime="2024-12-13 04:01:28.415555149 +0000 UTC m=+20.332298101" watchObservedRunningTime="2024-12-13 04:01:42.69781724 +0000 UTC m=+34.614560323" Dec 13 04:01:42.717322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953719296.mount: Deactivated successfully. Dec 13 04:01:42.728235 env[1142]: time="2024-12-13T04:01:42.728164949Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65\"" Dec 13 04:01:42.729038 env[1142]: time="2024-12-13T04:01:42.729000900Z" level=info msg="StartContainer for \"8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65\"" Dec 13 04:01:42.746846 systemd[1]: Started cri-containerd-8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65.scope. 
Dec 13 04:01:42.786380 env[1142]: time="2024-12-13T04:01:42.786312999Z" level=info msg="StartContainer for \"8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65\" returns successfully" Dec 13 04:01:42.796856 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 04:01:42.797110 systemd[1]: Stopped systemd-sysctl.service. Dec 13 04:01:42.797788 systemd[1]: Stopping systemd-sysctl.service... Dec 13 04:01:42.800992 systemd[1]: Starting systemd-sysctl.service... Dec 13 04:01:42.801250 systemd[1]: cri-containerd-8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65.scope: Deactivated successfully. Dec 13 04:01:42.836112 systemd[1]: Finished systemd-sysctl.service. Dec 13 04:01:42.839844 env[1142]: time="2024-12-13T04:01:42.839558761Z" level=info msg="shim disconnected" id=8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65 Dec 13 04:01:42.839844 env[1142]: time="2024-12-13T04:01:42.839643914Z" level=warning msg="cleaning up after shim disconnected" id=8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65 namespace=k8s.io Dec 13 04:01:42.839844 env[1142]: time="2024-12-13T04:01:42.839704529Z" level=info msg="cleaning up dead shim" Dec 13 04:01:42.850588 env[1142]: time="2024-12-13T04:01:42.850526211Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:01:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2483 runtime=io.containerd.runc.v2\n" Dec 13 04:01:43.659471 env[1142]: time="2024-12-13T04:01:43.659387933Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:01:43.752026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1448364318.mount: Deactivated successfully. Dec 13 04:01:43.770012 env[1142]: time="2024-12-13T04:01:43.769838982Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b\"" Dec 13 04:01:43.773746 env[1142]: time="2024-12-13T04:01:43.771623166Z" level=info msg="StartContainer for \"1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b\"" Dec 13 04:01:43.804486 systemd[1]: Started cri-containerd-1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b.scope. Dec 13 04:01:43.859012 env[1142]: time="2024-12-13T04:01:43.858963379Z" level=info msg="StartContainer for \"1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b\" returns successfully" Dec 13 04:01:43.867160 systemd[1]: cri-containerd-1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b.scope: Deactivated successfully. 
Dec 13 04:01:43.902918 env[1142]: time="2024-12-13T04:01:43.902849964Z" level=info msg="shim disconnected" id=1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b Dec 13 04:01:43.903225 env[1142]: time="2024-12-13T04:01:43.903205965Z" level=warning msg="cleaning up after shim disconnected" id=1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b namespace=k8s.io Dec 13 04:01:43.903312 env[1142]: time="2024-12-13T04:01:43.903297260Z" level=info msg="cleaning up dead shim" Dec 13 04:01:43.912726 env[1142]: time="2024-12-13T04:01:43.912054589Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:01:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2543 runtime=io.containerd.runc.v2\n" Dec 13 04:01:44.587303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b-rootfs.mount: Deactivated successfully. Dec 13 04:01:44.687211 env[1142]: time="2024-12-13T04:01:44.686451351Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 04:01:44.729316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041669077.mount: Deactivated successfully. Dec 13 04:01:44.749589 env[1142]: time="2024-12-13T04:01:44.749530851Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17\"" Dec 13 04:01:44.750024 env[1142]: time="2024-12-13T04:01:44.749993947Z" level=info msg="StartContainer for \"3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17\"" Dec 13 04:01:44.772973 systemd[1]: Started cri-containerd-3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17.scope. Dec 13 04:01:44.811117 env[1142]: time="2024-12-13T04:01:44.811024759Z" level=info msg="StartContainer for \"3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17\" returns successfully" Dec 13 04:01:44.812639 systemd[1]: cri-containerd-3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17.scope: Deactivated successfully. Dec 13 04:01:44.851086 env[1142]: time="2024-12-13T04:01:44.850365534Z" level=info msg="shim disconnected" id=3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17 Dec 13 04:01:44.851086 env[1142]: time="2024-12-13T04:01:44.850832948Z" level=warning msg="cleaning up after shim disconnected" id=3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17 namespace=k8s.io Dec 13 04:01:44.851086 env[1142]: time="2024-12-13T04:01:44.850850231Z" level=info msg="cleaning up dead shim" Dec 13 04:01:44.860460 env[1142]: time="2024-12-13T04:01:44.860409875Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:01:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2602 runtime=io.containerd.runc.v2\n" Dec 13 04:01:45.588005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17-rootfs.mount: Deactivated successfully. 
Dec 13 04:01:45.721834 env[1142]: time="2024-12-13T04:01:45.720433626Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 04:01:45.789779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580737719.mount: Deactivated successfully. Dec 13 04:01:45.801317 env[1142]: time="2024-12-13T04:01:45.801190316Z" level=info msg="CreateContainer within sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883\"" Dec 13 04:01:45.803062 env[1142]: time="2024-12-13T04:01:45.802969217Z" level=info msg="StartContainer for \"93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883\"" Dec 13 04:01:45.858511 systemd[1]: Started cri-containerd-93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883.scope. Dec 13 04:01:45.911980 env[1142]: time="2024-12-13T04:01:45.911898323Z" level=info msg="StartContainer for \"93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883\" returns successfully" Dec 13 04:01:46.020394 kubelet[1973]: I1213 04:01:46.020138 1973 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 04:01:46.222986 kubelet[1973]: I1213 04:01:46.222809 1973 topology_manager.go:215] "Topology Admit Handler" podUID="98d8b762-4360-4fa8-ac99-fb755c641910" podNamespace="kube-system" podName="coredns-76f75df574-w79ct" Dec 13 04:01:46.234506 kubelet[1973]: I1213 04:01:46.234394 1973 topology_manager.go:215] "Topology Admit Handler" podUID="01a317db-2f2d-46f9-a5f7-a55399c73eaf" podNamespace="kube-system" podName="coredns-76f75df574-hk2mg" Dec 13 04:01:46.240448 systemd[1]: Created slice kubepods-burstable-pod98d8b762_4360_4fa8_ac99_fb755c641910.slice. Dec 13 04:01:46.259971 systemd[1]: Created slice kubepods-burstable-pod01a317db_2f2d_46f9_a5f7_a55399c73eaf.slice. 
Dec 13 04:01:46.339648 kubelet[1973]: I1213 04:01:46.339603 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98d8b762-4360-4fa8-ac99-fb755c641910-config-volume\") pod \"coredns-76f75df574-w79ct\" (UID: \"98d8b762-4360-4fa8-ac99-fb755c641910\") " pod="kube-system/coredns-76f75df574-w79ct" Dec 13 04:01:46.339999 kubelet[1973]: I1213 04:01:46.339987 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6j6\" (UniqueName: \"kubernetes.io/projected/01a317db-2f2d-46f9-a5f7-a55399c73eaf-kube-api-access-kv6j6\") pod \"coredns-76f75df574-hk2mg\" (UID: \"01a317db-2f2d-46f9-a5f7-a55399c73eaf\") " pod="kube-system/coredns-76f75df574-hk2mg" Dec 13 04:01:46.340156 kubelet[1973]: I1213 04:01:46.340142 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kth4\" (UniqueName: \"kubernetes.io/projected/98d8b762-4360-4fa8-ac99-fb755c641910-kube-api-access-6kth4\") pod \"coredns-76f75df574-w79ct\" (UID: \"98d8b762-4360-4fa8-ac99-fb755c641910\") " pod="kube-system/coredns-76f75df574-w79ct" Dec 13 04:01:46.340306 kubelet[1973]: I1213 04:01:46.340294 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a317db-2f2d-46f9-a5f7-a55399c73eaf-config-volume\") pod \"coredns-76f75df574-hk2mg\" (UID: \"01a317db-2f2d-46f9-a5f7-a55399c73eaf\") " pod="kube-system/coredns-76f75df574-hk2mg" Dec 13 04:01:46.555211 env[1142]: time="2024-12-13T04:01:46.555106510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-w79ct,Uid:98d8b762-4360-4fa8-ac99-fb755c641910,Namespace:kube-system,Attempt:0,}" Dec 13 04:01:46.577134 env[1142]: time="2024-12-13T04:01:46.576682044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hk2mg,Uid:01a317db-2f2d-46f9-a5f7-a55399c73eaf,Namespace:kube-system,Attempt:0,}" Dec 13 04:01:46.589258 systemd[1]: run-containerd-runc-k8s.io-93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883-runc.IjGWsl.mount: Deactivated successfully. 
Dec 13 04:01:46.756581 kubelet[1973]: I1213 04:01:46.756542 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x52vp" podStartSLOduration=7.803252562 podStartE2EDuration="27.756474193s" podCreationTimestamp="2024-12-13 04:01:19 +0000 UTC" firstStartedPulling="2024-12-13 04:01:21.561894148 +0000 UTC m=+13.478637110" lastFinishedPulling="2024-12-13 04:01:41.515115739 +0000 UTC m=+33.431858741" observedRunningTime="2024-12-13 04:01:46.755553283 +0000 UTC m=+38.672296235" watchObservedRunningTime="2024-12-13 04:01:46.756474193 +0000 UTC m=+38.673217146" Dec 13 04:01:48.683634 systemd-networkd[971]: cilium_host: Link UP Dec 13 04:01:48.692211 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 04:01:48.692307 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 04:01:48.691514 systemd-networkd[971]: cilium_net: Link UP Dec 13 04:01:48.692047 systemd-networkd[971]: cilium_net: Gained carrier Dec 13 04:01:48.692446 systemd-networkd[971]: cilium_host: Gained carrier Dec 13 04:01:48.848437 systemd-networkd[971]: cilium_vxlan: Link UP Dec 13 04:01:48.848445 systemd-networkd[971]: cilium_vxlan: Gained carrier Dec 13 04:01:48.975955 systemd-networkd[971]: cilium_net: Gained IPv6LL Dec 13 04:01:49.199927 systemd-networkd[971]: cilium_host: Gained IPv6LL Dec 13 04:01:49.656761 kernel: NET: Registered PF_ALG protocol family Dec 13 04:01:50.519440 systemd-networkd[971]: lxc_health: Link UP Dec 13 04:01:50.522810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 04:01:50.522973 systemd-networkd[971]: lxc_health: Gained carrier Dec 13 04:01:50.558014 systemd-networkd[971]: cilium_vxlan: Gained IPv6LL Dec 13 04:01:50.676514 systemd-networkd[971]: lxc04c56d346e5b: Link UP Dec 13 04:01:50.690529 kernel: eth0: renamed from tmp03c54 Dec 13 04:01:50.695740 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc04c56d346e5b: link becomes ready Dec 13 04:01:50.696840 systemd-networkd[971]: lxc04c56d346e5b: Gained carrier Dec 13 04:01:51.155940 systemd-networkd[971]: lxcf1dcde551506: Link UP Dec 13 04:01:51.161691 kernel: eth0: renamed from tmp28249 Dec 13 04:01:51.165967 systemd-networkd[971]: lxcf1dcde551506: Gained carrier Dec 13 04:01:51.167280 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf1dcde551506: link becomes ready Dec 13 04:01:52.207920 systemd-networkd[971]: lxc_health: Gained IPv6LL Dec 13 04:01:52.399871 systemd-networkd[971]: lxc04c56d346e5b: Gained IPv6LL Dec 13 04:01:53.040120 systemd-networkd[971]: lxcf1dcde551506: Gained IPv6LL Dec 13 04:01:55.361300 env[1142]: time="2024-12-13T04:01:55.353768918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:01:55.361300 env[1142]: time="2024-12-13T04:01:55.353805147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:01:55.361300 env[1142]: time="2024-12-13T04:01:55.353818031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:01:55.361300 env[1142]: time="2024-12-13T04:01:55.353936387Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28249059e706f7784582eb7870316379c161c80e6c3de82e7d4f3db64a54af13 pid=3167 runtime=io.containerd.runc.v2 Dec 13 04:01:55.375429 env[1142]: time="2024-12-13T04:01:55.371674627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:01:55.375429 env[1142]: time="2024-12-13T04:01:55.371738338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:01:55.375429 env[1142]: time="2024-12-13T04:01:55.371763466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:01:55.379595 systemd[1]: run-containerd-runc-k8s.io-28249059e706f7784582eb7870316379c161c80e6c3de82e7d4f3db64a54af13-runc.FK2jcz.mount: Deactivated successfully. Dec 13 04:01:55.389687 env[1142]: time="2024-12-13T04:01:55.385463221Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03c54a37c8f4c51a2496542075ae588709c0fe6482689d3e9c348f5bc243b9d2 pid=3161 runtime=io.containerd.runc.v2 Dec 13 04:01:55.393791 systemd[1]: Started cri-containerd-28249059e706f7784582eb7870316379c161c80e6c3de82e7d4f3db64a54af13.scope. Dec 13 04:01:55.432478 systemd[1]: Started cri-containerd-03c54a37c8f4c51a2496542075ae588709c0fe6482689d3e9c348f5bc243b9d2.scope. Dec 13 04:01:55.500680 env[1142]: time="2024-12-13T04:01:55.500595528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hk2mg,Uid:01a317db-2f2d-46f9-a5f7-a55399c73eaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"03c54a37c8f4c51a2496542075ae588709c0fe6482689d3e9c348f5bc243b9d2\"" Dec 13 04:01:55.505481 env[1142]: time="2024-12-13T04:01:55.504963121Z" level=info msg="CreateContainer within sandbox \"03c54a37c8f4c51a2496542075ae588709c0fe6482689d3e9c348f5bc243b9d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 04:01:55.511914 env[1142]: time="2024-12-13T04:01:55.511859135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-w79ct,Uid:98d8b762-4360-4fa8-ac99-fb755c641910,Namespace:kube-system,Attempt:0,} returns sandbox id \"28249059e706f7784582eb7870316379c161c80e6c3de82e7d4f3db64a54af13\"" Dec 13 04:01:55.517858 env[1142]: time="2024-12-13T04:01:55.517767501Z" level=info msg="CreateContainer within sandbox \"28249059e706f7784582eb7870316379c161c80e6c3de82e7d4f3db64a54af13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 04:01:55.547068 env[1142]: time="2024-12-13T04:01:55.546919355Z" level=info msg="CreateContainer within sandbox \"28249059e706f7784582eb7870316379c161c80e6c3de82e7d4f3db64a54af13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04b23caee3665b7325582c8d19c2c0ece506f4c2cf992849253bfcf075624cdf\"" Dec 13 04:01:55.548727 env[1142]: time="2024-12-13T04:01:55.548682751Z" level=info msg="StartContainer for \"04b23caee3665b7325582c8d19c2c0ece506f4c2cf992849253bfcf075624cdf\"" Dec 13 04:01:55.559204 env[1142]: time="2024-12-13T04:01:55.559141607Z" level=info msg="CreateContainer within sandbox \"03c54a37c8f4c51a2496542075ae588709c0fe6482689d3e9c348f5bc243b9d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"1a65c917dc6e0feefdc4ee239001f01bcecf4bbbbc1c6a6686278a40a50e4bbc\"" Dec 13 04:01:55.561805 env[1142]: time="2024-12-13T04:01:55.560212385Z" level=info msg="StartContainer for \"1a65c917dc6e0feefdc4ee239001f01bcecf4bbbbc1c6a6686278a40a50e4bbc\"" Dec 13 04:01:55.576538 systemd[1]: Started cri-containerd-04b23caee3665b7325582c8d19c2c0ece506f4c2cf992849253bfcf075624cdf.scope. Dec 13 04:01:55.598302 systemd[1]: Started cri-containerd-1a65c917dc6e0feefdc4ee239001f01bcecf4bbbbc1c6a6686278a40a50e4bbc.scope. Dec 13 04:01:55.665802 env[1142]: time="2024-12-13T04:01:55.665086289Z" level=info msg="StartContainer for \"1a65c917dc6e0feefdc4ee239001f01bcecf4bbbbc1c6a6686278a40a50e4bbc\" returns successfully" Dec 13 04:01:55.666104 env[1142]: time="2024-12-13T04:01:55.665104814Z" level=info msg="StartContainer for \"04b23caee3665b7325582c8d19c2c0ece506f4c2cf992849253bfcf075624cdf\" returns successfully" Dec 13 04:01:55.771925 kubelet[1973]: I1213 04:01:55.771875 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hk2mg" podStartSLOduration=36.771826394 podStartE2EDuration="36.771826394s" podCreationTimestamp="2024-12-13 04:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:01:55.769499445 +0000 UTC m=+47.686242397" watchObservedRunningTime="2024-12-13 04:01:55.771826394 +0000 UTC m=+47.688569357" Dec 13 04:01:56.583304 kubelet[1973]: I1213 04:01:56.583245 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-w79ct" podStartSLOduration=37.583148721 podStartE2EDuration="37.583148721s" podCreationTimestamp="2024-12-13 04:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:01:55.790980239 +0000 UTC m=+47.707723262" watchObservedRunningTime="2024-12-13 04:01:56.583148721 +0000 UTC m=+48.499891723" Dec 13 04:02:16.134858 systemd[1]: Started sshd@7-172.24.4.115:22-172.24.4.1:54620.service. Dec 13 04:02:17.619697 sshd[3319]: Accepted publickey for core from 172.24.4.1 port 54620 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:17.624015 sshd[3319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:17.636873 systemd-logind[1134]: New session 8 of user core. Dec 13 04:02:17.637492 systemd[1]: Started session-8.scope. Dec 13 04:02:18.403175 sshd[3319]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:18.408633 systemd[1]: sshd@7-172.24.4.115:22-172.24.4.1:54620.service: Deactivated successfully. Dec 13 04:02:18.410554 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 04:02:18.414412 systemd-logind[1134]: Session 8 logged out. Waiting for processes to exit. Dec 13 04:02:18.417051 systemd-logind[1134]: Removed session 8. Dec 13 04:02:23.415374 systemd[1]: Started sshd@8-172.24.4.115:22-172.24.4.1:54624.service. Dec 13 04:02:24.737913 sshd[3336]: Accepted publickey for core from 172.24.4.1 port 54624 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:24.742938 sshd[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:24.756006 systemd-logind[1134]: New session 9 of user core. Dec 13 04:02:24.758142 systemd[1]: Started session-9.scope. 
Dec 13 04:02:25.521851 sshd[3336]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:25.529150 systemd[1]: sshd@8-172.24.4.115:22-172.24.4.1:54624.service: Deactivated successfully. Dec 13 04:02:25.530908 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 04:02:25.532456 systemd-logind[1134]: Session 9 logged out. Waiting for processes to exit. Dec 13 04:02:25.535468 systemd-logind[1134]: Removed session 9. Dec 13 04:02:30.534626 systemd[1]: Started sshd@9-172.24.4.115:22-172.24.4.1:40106.service. Dec 13 04:02:31.939582 sshd[3352]: Accepted publickey for core from 172.24.4.1 port 40106 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:31.942292 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:31.954354 systemd-logind[1134]: New session 10 of user core. Dec 13 04:02:31.955635 systemd[1]: Started session-10.scope. Dec 13 04:02:32.885261 sshd[3352]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:32.891250 systemd[1]: sshd@9-172.24.4.115:22-172.24.4.1:40106.service: Deactivated successfully. Dec 13 04:02:32.892865 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 04:02:32.894476 systemd-logind[1134]: Session 10 logged out. Waiting for processes to exit. Dec 13 04:02:32.897826 systemd-logind[1134]: Removed session 10. Dec 13 04:02:37.900710 systemd[1]: Started sshd@10-172.24.4.115:22-172.24.4.1:41682.service. Dec 13 04:02:38.856592 sshd[3365]: Accepted publickey for core from 172.24.4.1 port 41682 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:38.860247 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:38.871833 systemd-logind[1134]: New session 11 of user core. Dec 13 04:02:38.873594 systemd[1]: Started session-11.scope. Dec 13 04:02:39.765323 sshd[3365]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:39.775348 systemd[1]: Started sshd@11-172.24.4.115:22-172.24.4.1:41694.service. Dec 13 04:02:39.778640 systemd[1]: sshd@10-172.24.4.115:22-172.24.4.1:41682.service: Deactivated successfully. Dec 13 04:02:39.780242 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 04:02:39.784100 systemd-logind[1134]: Session 11 logged out. Waiting for processes to exit. Dec 13 04:02:39.787134 systemd-logind[1134]: Removed session 11. Dec 13 04:02:41.452201 sshd[3376]: Accepted publickey for core from 172.24.4.1 port 41694 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:41.454746 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:41.465022 systemd-logind[1134]: New session 12 of user core. Dec 13 04:02:41.467145 systemd[1]: Started session-12.scope. Dec 13 04:02:42.332654 sshd[3376]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:42.339965 systemd[1]: Started sshd@12-172.24.4.115:22-172.24.4.1:41696.service. Dec 13 04:02:42.348174 systemd[1]: sshd@11-172.24.4.115:22-172.24.4.1:41694.service: Deactivated successfully. Dec 13 04:02:42.350207 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 04:02:42.352831 systemd-logind[1134]: Session 12 logged out. Waiting for processes to exit. Dec 13 04:02:42.355402 systemd-logind[1134]: Removed session 12. 
Dec 13 04:02:43.715903 sshd[3386]: Accepted publickey for core from 172.24.4.1 port 41696 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:43.718652 sshd[3386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:43.730809 systemd-logind[1134]: New session 13 of user core. Dec 13 04:02:43.731733 systemd[1]: Started session-13.scope. Dec 13 04:02:44.451172 sshd[3386]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:44.455016 systemd-logind[1134]: Session 13 logged out. Waiting for processes to exit. Dec 13 04:02:44.455773 systemd[1]: sshd@12-172.24.4.115:22-172.24.4.1:41696.service: Deactivated successfully. Dec 13 04:02:44.456496 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 04:02:44.457599 systemd-logind[1134]: Removed session 13. Dec 13 04:02:49.464560 systemd[1]: Started sshd@13-172.24.4.115:22-172.24.4.1:42580.service. Dec 13 04:02:50.882407 sshd[3399]: Accepted publickey for core from 172.24.4.1 port 42580 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:50.885564 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:50.895913 systemd-logind[1134]: New session 14 of user core. Dec 13 04:02:50.898004 systemd[1]: Started session-14.scope. Dec 13 04:02:51.695957 sshd[3399]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:51.702852 systemd[1]: sshd@13-172.24.4.115:22-172.24.4.1:42580.service: Deactivated successfully. Dec 13 04:02:51.704915 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 04:02:51.706710 systemd-logind[1134]: Session 14 logged out. Waiting for processes to exit. Dec 13 04:02:51.710212 systemd-logind[1134]: Removed session 14. Dec 13 04:02:56.710256 systemd[1]: Started sshd@14-172.24.4.115:22-172.24.4.1:59110.service. Dec 13 04:02:58.221527 sshd[3414]: Accepted publickey for core from 172.24.4.1 port 59110 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:02:58.225835 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:02:58.241716 systemd-logind[1134]: New session 15 of user core. Dec 13 04:02:58.244156 systemd[1]: Started session-15.scope. Dec 13 04:02:59.065520 sshd[3414]: pam_unix(sshd:session): session closed for user core Dec 13 04:02:59.071329 systemd[1]: sshd@14-172.24.4.115:22-172.24.4.1:59110.service: Deactivated successfully. Dec 13 04:02:59.072899 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 04:02:59.079271 systemd[1]: Started sshd@15-172.24.4.115:22-172.24.4.1:59124.service. Dec 13 04:02:59.085245 systemd-logind[1134]: Session 15 logged out. Waiting for processes to exit. Dec 13 04:02:59.087802 systemd-logind[1134]: Removed session 15. Dec 13 04:03:00.604311 sshd[3426]: Accepted publickey for core from 172.24.4.1 port 59124 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:00.607149 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:00.617800 systemd-logind[1134]: New session 16 of user core. Dec 13 04:03:00.618272 systemd[1]: Started session-16.scope. Dec 13 04:03:02.360355 sshd[3426]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:02.367653 systemd[1]: Started sshd@16-172.24.4.115:22-172.24.4.1:59132.service. Dec 13 04:03:02.368948 systemd[1]: sshd@15-172.24.4.115:22-172.24.4.1:59124.service: Deactivated successfully. Dec 13 04:03:02.373380 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 13 04:03:02.375338 systemd-logind[1134]: Session 16 logged out. Waiting for processes to exit. Dec 13 04:03:02.379812 systemd-logind[1134]: Removed session 16. Dec 13 04:03:03.641978 sshd[3435]: Accepted publickey for core from 172.24.4.1 port 59132 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:03.644816 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:03.657121 systemd-logind[1134]: New session 17 of user core. Dec 13 04:03:03.658130 systemd[1]: Started session-17.scope. Dec 13 04:03:07.207961 systemd[1]: Started sshd@17-172.24.4.115:22-172.24.4.1:48254.service. Dec 13 04:03:07.240785 sshd[3435]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:07.334058 systemd[1]: sshd@16-172.24.4.115:22-172.24.4.1:59132.service: Deactivated successfully. Dec 13 04:03:07.335626 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 04:03:07.338104 systemd-logind[1134]: Session 17 logged out. Waiting for processes to exit. Dec 13 04:03:07.341443 systemd-logind[1134]: Removed session 17. Dec 13 04:03:08.622773 sshd[3452]: Accepted publickey for core from 172.24.4.1 port 48254 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:08.626122 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:08.638274 systemd-logind[1134]: New session 18 of user core. Dec 13 04:03:08.639767 systemd[1]: Started session-18.scope. Dec 13 04:03:09.848837 sshd[3452]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:09.861038 systemd[1]: Started sshd@18-172.24.4.115:22-172.24.4.1:48262.service. Dec 13 04:03:09.862580 systemd[1]: sshd@17-172.24.4.115:22-172.24.4.1:48254.service: Deactivated successfully. Dec 13 04:03:09.864753 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 04:03:09.869856 systemd-logind[1134]: Session 18 logged out. Waiting for processes to exit. Dec 13 04:03:09.873367 systemd-logind[1134]: Removed session 18. Dec 13 04:03:11.141922 sshd[3463]: Accepted publickey for core from 172.24.4.1 port 48262 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:11.142473 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:11.150798 systemd-logind[1134]: New session 19 of user core. Dec 13 04:03:11.152008 systemd[1]: Started session-19.scope. Dec 13 04:03:12.236323 sshd[3463]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:12.240606 systemd[1]: sshd@18-172.24.4.115:22-172.24.4.1:48262.service: Deactivated successfully. Dec 13 04:03:12.241618 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 04:03:12.243033 systemd-logind[1134]: Session 19 logged out. Waiting for processes to exit. Dec 13 04:03:12.244848 systemd-logind[1134]: Removed session 19. Dec 13 04:03:17.246367 systemd[1]: Started sshd@19-172.24.4.115:22-172.24.4.1:45050.service. Dec 13 04:03:18.379604 sshd[3476]: Accepted publickey for core from 172.24.4.1 port 45050 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:18.382092 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:18.391457 systemd-logind[1134]: New session 20 of user core. Dec 13 04:03:18.394913 systemd[1]: Started session-20.scope. Dec 13 04:03:19.122461 sshd[3476]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:19.128198 systemd[1]: sshd@19-172.24.4.115:22-172.24.4.1:45050.service: Deactivated successfully. 
Dec 13 04:03:19.129941 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 04:03:19.131368 systemd-logind[1134]: Session 20 logged out. Waiting for processes to exit. Dec 13 04:03:19.133613 systemd-logind[1134]: Removed session 20. Dec 13 04:03:24.134095 systemd[1]: Started sshd@20-172.24.4.115:22-172.24.4.1:45052.service. Dec 13 04:03:25.267394 sshd[3493]: Accepted publickey for core from 172.24.4.1 port 45052 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:25.270849 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:25.282133 systemd-logind[1134]: New session 21 of user core. Dec 13 04:03:25.282992 systemd[1]: Started session-21.scope. Dec 13 04:03:26.150045 sshd[3493]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:26.154550 systemd[1]: sshd@20-172.24.4.115:22-172.24.4.1:45052.service: Deactivated successfully. Dec 13 04:03:26.155484 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 04:03:26.156833 systemd-logind[1134]: Session 21 logged out. Waiting for processes to exit. Dec 13 04:03:26.158077 systemd-logind[1134]: Removed session 21. Dec 13 04:03:31.162152 systemd[1]: Started sshd@21-172.24.4.115:22-172.24.4.1:34628.service. Dec 13 04:03:32.603377 sshd[3506]: Accepted publickey for core from 172.24.4.1 port 34628 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:32.606055 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:32.617825 systemd-logind[1134]: New session 22 of user core. Dec 13 04:03:32.619568 systemd[1]: Started session-22.scope. Dec 13 04:03:33.441307 sshd[3506]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:33.446279 systemd[1]: Started sshd@22-172.24.4.115:22-172.24.4.1:34638.service. Dec 13 04:03:33.448030 systemd[1]: sshd@21-172.24.4.115:22-172.24.4.1:34628.service: Deactivated successfully. Dec 13 04:03:33.449092 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 04:03:33.451521 systemd-logind[1134]: Session 22 logged out. Waiting for processes to exit. Dec 13 04:03:33.453676 systemd-logind[1134]: Removed session 22. Dec 13 04:03:34.638462 sshd[3517]: Accepted publickey for core from 172.24.4.1 port 34638 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:34.641308 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:34.652846 systemd-logind[1134]: New session 23 of user core. Dec 13 04:03:34.653784 systemd[1]: Started session-23.scope. Dec 13 04:03:37.858851 systemd[1]: run-containerd-runc-k8s.io-93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883-runc.UT5a2C.mount: Deactivated successfully. 
Dec 13 04:03:37.915773 env[1142]: time="2024-12-13T04:03:37.915639382Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 04:03:37.941276 env[1142]: time="2024-12-13T04:03:37.941216030Z" level=info msg="StopContainer for \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\" with timeout 30 (s)" Dec 13 04:03:37.941884 env[1142]: time="2024-12-13T04:03:37.941844593Z" level=info msg="StopContainer for \"93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883\" with timeout 2 (s)" Dec 13 04:03:37.942432 env[1142]: time="2024-12-13T04:03:37.942407592Z" level=info msg="Stop container \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\" with signal terminated" Dec 13 04:03:37.942652 env[1142]: time="2024-12-13T04:03:37.942629981Z" level=info msg="Stop container \"93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883\" with signal terminated" Dec 13 04:03:37.963397 systemd-networkd[971]: lxc_health: Link DOWN Dec 13 04:03:37.963418 systemd-networkd[971]: lxc_health: Lost carrier Dec 13 04:03:37.970278 systemd[1]: cri-containerd-e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f.scope: Deactivated successfully. Dec 13 04:03:38.003881 systemd[1]: cri-containerd-93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883.scope: Deactivated successfully. Dec 13 04:03:38.004242 systemd[1]: cri-containerd-93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883.scope: Consumed 8.768s CPU time. Dec 13 04:03:38.019234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f-rootfs.mount: Deactivated successfully. Dec 13 04:03:38.042044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883-rootfs.mount: Deactivated successfully. 
Dec 13 04:03:38.093288 env[1142]: time="2024-12-13T04:03:38.093151511Z" level=info msg="shim disconnected" id=e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f Dec 13 04:03:38.094014 env[1142]: time="2024-12-13T04:03:38.093941498Z" level=warning msg="cleaning up after shim disconnected" id=e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f namespace=k8s.io Dec 13 04:03:38.094328 env[1142]: time="2024-12-13T04:03:38.094261119Z" level=info msg="cleaning up dead shim" Dec 13 04:03:38.095080 env[1142]: time="2024-12-13T04:03:38.093621105Z" level=info msg="shim disconnected" id=93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883 Dec 13 04:03:38.095464 env[1142]: time="2024-12-13T04:03:38.095417024Z" level=warning msg="cleaning up after shim disconnected" id=93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883 namespace=k8s.io Dec 13 04:03:38.095732 env[1142]: time="2024-12-13T04:03:38.095648901Z" level=info msg="cleaning up dead shim" Dec 13 04:03:38.117136 env[1142]: time="2024-12-13T04:03:38.114177896Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3588 runtime=io.containerd.runc.v2\n" Dec 13 04:03:38.127250 env[1142]: time="2024-12-13T04:03:38.127075304Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3589 runtime=io.containerd.runc.v2\n" Dec 13 04:03:38.143813 env[1142]: time="2024-12-13T04:03:38.143727728Z" level=info msg="StopContainer for \"93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883\" returns successfully" Dec 13 04:03:38.149739 env[1142]: time="2024-12-13T04:03:38.147459620Z" level=info msg="StopPodSandbox for \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\"" Dec 13 04:03:38.149739 env[1142]: time="2024-12-13T04:03:38.148093062Z" level=info msg="Container to stop \"1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.149739 env[1142]: time="2024-12-13T04:03:38.148295202Z" level=info msg="Container to stop \"93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.149739 env[1142]: time="2024-12-13T04:03:38.148560242Z" level=info msg="Container to stop \"cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.149739 env[1142]: time="2024-12-13T04:03:38.148686920Z" level=info msg="Container to stop \"8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.149739 env[1142]: time="2024-12-13T04:03:38.148756521Z" level=info msg="Container to stop \"3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.153605 env[1142]: time="2024-12-13T04:03:38.153540733Z" level=info msg="StopContainer for \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\" returns successfully" Dec 13 04:03:38.154534 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b-shm.mount: Deactivated successfully. 
Dec 13 04:03:38.157602 env[1142]: time="2024-12-13T04:03:38.157053793Z" level=info msg="StopPodSandbox for \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\"" Dec 13 04:03:38.157602 env[1142]: time="2024-12-13T04:03:38.157220086Z" level=info msg="Container to stop \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:38.179487 systemd[1]: cri-containerd-2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a.scope: Deactivated successfully. Dec 13 04:03:38.195277 systemd[1]: cri-containerd-e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b.scope: Deactivated successfully. Dec 13 04:03:38.257894 env[1142]: time="2024-12-13T04:03:38.257825070Z" level=info msg="shim disconnected" id=e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b Dec 13 04:03:38.258810 env[1142]: time="2024-12-13T04:03:38.258471998Z" level=warning msg="cleaning up after shim disconnected" id=e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b namespace=k8s.io Dec 13 04:03:38.258810 env[1142]: time="2024-12-13T04:03:38.258525648Z" level=info msg="cleaning up dead shim" Dec 13 04:03:38.259168 env[1142]: time="2024-12-13T04:03:38.259087335Z" level=info msg="shim disconnected" id=2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a Dec 13 04:03:38.259356 env[1142]: time="2024-12-13T04:03:38.259333609Z" level=warning msg="cleaning up after shim disconnected" id=2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a namespace=k8s.io Dec 13 04:03:38.259515 env[1142]: time="2024-12-13T04:03:38.259496766Z" level=info msg="cleaning up dead shim" Dec 13 04:03:38.274541 env[1142]: time="2024-12-13T04:03:38.274479047Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3654 runtime=io.containerd.runc.v2\n" Dec 13 04:03:38.275240 env[1142]: time="2024-12-13T04:03:38.275209121Z" level=info msg="TearDown network for sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" successfully" Dec 13 04:03:38.275359 env[1142]: time="2024-12-13T04:03:38.275337653Z" level=info msg="StopPodSandbox for \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" returns successfully" Dec 13 04:03:38.286817 env[1142]: time="2024-12-13T04:03:38.286722605Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3655 runtime=io.containerd.runc.v2\n" Dec 13 04:03:38.287618 env[1142]: time="2024-12-13T04:03:38.287589416Z" level=info msg="TearDown network for sandbox \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\" successfully" Dec 13 04:03:38.287799 env[1142]: time="2024-12-13T04:03:38.287776338Z" level=info msg="StopPodSandbox for \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\" returns successfully" Dec 13 04:03:38.405220 kubelet[1973]: E1213 04:03:38.405028 1973 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:03:38.467045 kubelet[1973]: I1213 04:03:38.467004 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whzxp\" (UniqueName: \"kubernetes.io/projected/6686003e-0404-4e8c-bfda-9b230d216233-kube-api-access-whzxp\") pod \"6686003e-0404-4e8c-bfda-9b230d216233\" 
(UID: \"6686003e-0404-4e8c-bfda-9b230d216233\") " Dec 13 04:03:38.467307 kubelet[1973]: I1213 04:03:38.467293 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-lib-modules\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.467407 kubelet[1973]: I1213 04:03:38.467395 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-hubble-tls\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.467497 kubelet[1973]: I1213 04:03:38.467485 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-cgroup\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.467589 kubelet[1973]: I1213 04:03:38.467578 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-host-proc-sys-kernel\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.467705 kubelet[1973]: I1213 04:03:38.467693 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-host-proc-sys-net\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.467806 kubelet[1973]: I1213 04:03:38.467794 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-run\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.467921 kubelet[1973]: I1213 04:03:38.467910 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-xtables-lock\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.468020 kubelet[1973]: I1213 04:03:38.468008 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-hostproc\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.468122 kubelet[1973]: I1213 04:03:38.468110 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-config-path\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.468212 kubelet[1973]: I1213 04:03:38.468201 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-etc-cni-netd\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 
04:03:38.468421 kubelet[1973]: I1213 04:03:38.468409 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6686003e-0404-4e8c-bfda-9b230d216233-cilium-config-path\") pod \"6686003e-0404-4e8c-bfda-9b230d216233\" (UID: \"6686003e-0404-4e8c-bfda-9b230d216233\") " Dec 13 04:03:38.468521 kubelet[1973]: I1213 04:03:38.468509 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccbs2\" (UniqueName: \"kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-kube-api-access-ccbs2\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.468608 kubelet[1973]: I1213 04:03:38.468597 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-bpf-maps\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.468737 kubelet[1973]: I1213 04:03:38.468710 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cni-path\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.468862 kubelet[1973]: I1213 04:03:38.468850 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a34faea-a500-4bcb-85ce-7b85a42b01e0-clustermesh-secrets\") pod \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\" (UID: \"1a34faea-a500-4bcb-85ce-7b85a42b01e0\") " Dec 13 04:03:38.510175 kubelet[1973]: I1213 04:03:38.507553 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.510733 kubelet[1973]: I1213 04:03:38.510691 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.514378 kubelet[1973]: I1213 04:03:38.514347 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6686003e-0404-4e8c-bfda-9b230d216233-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6686003e-0404-4e8c-bfda-9b230d216233" (UID: "6686003e-0404-4e8c-bfda-9b230d216233"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:03:38.518478 kubelet[1973]: I1213 04:03:38.518417 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-kube-api-access-ccbs2" (OuterVolumeSpecName: "kube-api-access-ccbs2") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "kube-api-access-ccbs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:38.518902 kubelet[1973]: I1213 04:03:38.518881 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.519036 kubelet[1973]: I1213 04:03:38.519019 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.520417 kubelet[1973]: I1213 04:03:38.520342 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:38.520494 kubelet[1973]: I1213 04:03:38.520472 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.520588 kubelet[1973]: I1213 04:03:38.520532 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.520642 kubelet[1973]: I1213 04:03:38.520608 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.520775 kubelet[1973]: I1213 04:03:38.520726 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.520865 kubelet[1973]: I1213 04:03:38.520821 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.520926 kubelet[1973]: I1213 04:03:38.486264 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:38.521442 kubelet[1973]: I1213 04:03:38.521249 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a34faea-a500-4bcb-85ce-7b85a42b01e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:03:38.523201 kubelet[1973]: I1213 04:03:38.523100 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6686003e-0404-4e8c-bfda-9b230d216233-kube-api-access-whzxp" (OuterVolumeSpecName: "kube-api-access-whzxp") pod "6686003e-0404-4e8c-bfda-9b230d216233" (UID: "6686003e-0404-4e8c-bfda-9b230d216233"). InnerVolumeSpecName "kube-api-access-whzxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:38.523415 kubelet[1973]: I1213 04:03:38.523385 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a34faea-a500-4bcb-85ce-7b85a42b01e0" (UID: "1a34faea-a500-4bcb-85ce-7b85a42b01e0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:03:38.570226 kubelet[1973]: I1213 04:03:38.570160 1973 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-whzxp\" (UniqueName: \"kubernetes.io/projected/6686003e-0404-4e8c-bfda-9b230d216233-kube-api-access-whzxp\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570226 kubelet[1973]: I1213 04:03:38.570239 1973 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-lib-modules\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570521 kubelet[1973]: I1213 04:03:38.570343 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-cgroup\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570521 kubelet[1973]: I1213 04:03:38.570380 1973 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-hubble-tls\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570521 kubelet[1973]: I1213 04:03:38.570459 1973 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-host-proc-sys-kernel\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570521 kubelet[1973]: I1213 04:03:38.570490 1973 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-host-proc-sys-net\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570521 kubelet[1973]: I1213 04:03:38.570522 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-run\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570766 kubelet[1973]: I1213 04:03:38.570553 1973 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-xtables-lock\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570766 kubelet[1973]: I1213 04:03:38.570581 1973 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-hostproc\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570766 kubelet[1973]: I1213 04:03:38.570610 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cilium-config-path\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570766 kubelet[1973]: I1213 04:03:38.570643 1973 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a34faea-a500-4bcb-85ce-7b85a42b01e0-clustermesh-secrets\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570766 kubelet[1973]: I1213 04:03:38.570702 1973 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-etc-cni-netd\") on node 
\"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570766 kubelet[1973]: I1213 04:03:38.570735 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6686003e-0404-4e8c-bfda-9b230d216233-cilium-config-path\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570766 kubelet[1973]: I1213 04:03:38.570768 1973 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ccbs2\" (UniqueName: \"kubernetes.io/projected/1a34faea-a500-4bcb-85ce-7b85a42b01e0-kube-api-access-ccbs2\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570995 kubelet[1973]: I1213 04:03:38.570797 1973 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-bpf-maps\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.570995 kubelet[1973]: I1213 04:03:38.570825 1973 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a34faea-a500-4bcb-85ce-7b85a42b01e0-cni-path\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:38.853125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b-rootfs.mount: Deactivated successfully. Dec 13 04:03:38.853374 systemd[1]: var-lib-kubelet-pods-1a34faea\x2da500\x2d4bcb\x2d85ce\x2d7b85a42b01e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccbs2.mount: Deactivated successfully. Dec 13 04:03:38.853537 systemd[1]: var-lib-kubelet-pods-1a34faea\x2da500\x2d4bcb\x2d85ce\x2d7b85a42b01e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:03:38.853767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a-rootfs.mount: Deactivated successfully. Dec 13 04:03:38.853916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a-shm.mount: Deactivated successfully. Dec 13 04:03:38.854084 systemd[1]: var-lib-kubelet-pods-6686003e\x2d0404\x2d4e8c\x2dbfda\x2d9b230d216233-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwhzxp.mount: Deactivated successfully. Dec 13 04:03:38.854254 systemd[1]: var-lib-kubelet-pods-1a34faea\x2da500\x2d4bcb\x2d85ce\x2d7b85a42b01e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:03:39.187627 systemd[1]: Removed slice kubepods-besteffort-pod6686003e_0404_4e8c_bfda_9b230d216233.slice. Dec 13 04:03:39.194497 kubelet[1973]: I1213 04:03:39.194380 1973 scope.go:117] "RemoveContainer" containerID="e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f" Dec 13 04:03:39.206775 env[1142]: time="2024-12-13T04:03:39.205457912Z" level=info msg="RemoveContainer for \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\"" Dec 13 04:03:39.236022 systemd[1]: Removed slice kubepods-burstable-pod1a34faea_a500_4bcb_85ce_7b85a42b01e0.slice. Dec 13 04:03:39.236239 systemd[1]: kubepods-burstable-pod1a34faea_a500_4bcb_85ce_7b85a42b01e0.slice: Consumed 8.874s CPU time. 
Dec 13 04:03:39.246633 env[1142]: time="2024-12-13T04:03:39.246557195Z" level=info msg="RemoveContainer for \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\" returns successfully" Dec 13 04:03:39.262469 kubelet[1973]: I1213 04:03:39.262412 1973 scope.go:117] "RemoveContainer" containerID="e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f" Dec 13 04:03:39.263681 env[1142]: time="2024-12-13T04:03:39.263491789Z" level=error msg="ContainerStatus for \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\": not found" Dec 13 04:03:39.278822 kubelet[1973]: E1213 04:03:39.278778 1973 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\": not found" containerID="e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f" Dec 13 04:03:39.279245 kubelet[1973]: I1213 04:03:39.279228 1973 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f"} err="failed to get container status \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e891c1a95e5d0ab35d517cbaccb901d575513a0fba049da20bff2417431e905f\": not found" Dec 13 04:03:39.279340 kubelet[1973]: I1213 04:03:39.279329 1973 scope.go:117] "RemoveContainer" containerID="93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883" Dec 13 04:03:39.282873 env[1142]: time="2024-12-13T04:03:39.282112958Z" level=info msg="RemoveContainer for \"93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883\"" Dec 13 04:03:39.318791 env[1142]: time="2024-12-13T04:03:39.318619530Z" level=info msg="RemoveContainer for \"93995fba88da4d91c768698692f0d62d2404e5c2205cd1e8500e5ed14550d883\" returns successfully" Dec 13 04:03:39.319151 kubelet[1973]: I1213 04:03:39.319125 1973 scope.go:117] "RemoveContainer" containerID="3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17" Dec 13 04:03:39.320496 env[1142]: time="2024-12-13T04:03:39.320462257Z" level=info msg="RemoveContainer for \"3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17\"" Dec 13 04:03:39.349424 env[1142]: time="2024-12-13T04:03:39.349339874Z" level=info msg="RemoveContainer for \"3fc109a7b68b60077c5a81401837d624cd661c921230980e57858a3384067d17\" returns successfully" Dec 13 04:03:39.349992 kubelet[1973]: I1213 04:03:39.349965 1973 scope.go:117] "RemoveContainer" containerID="1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b" Dec 13 04:03:39.352252 env[1142]: time="2024-12-13T04:03:39.352180499Z" level=info msg="RemoveContainer for \"1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b\"" Dec 13 04:03:39.382222 env[1142]: time="2024-12-13T04:03:39.382018102Z" level=info msg="RemoveContainer for \"1d9120b339601d869f082673e9ce8c127d727d16334c4cbfe705c1ef458b949b\" returns successfully" Dec 13 04:03:39.382861 kubelet[1973]: I1213 04:03:39.382815 1973 scope.go:117] "RemoveContainer" containerID="8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65" Dec 13 04:03:39.386781 env[1142]: time="2024-12-13T04:03:39.386705723Z" level=info msg="RemoveContainer for 
\"8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65\"" Dec 13 04:03:39.421305 env[1142]: time="2024-12-13T04:03:39.421185640Z" level=info msg="RemoveContainer for \"8947465f3528f5941543093ae6f5e9253d7555b442ab297b613be2d43eb20d65\" returns successfully" Dec 13 04:03:39.421953 kubelet[1973]: I1213 04:03:39.421911 1973 scope.go:117] "RemoveContainer" containerID="cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599" Dec 13 04:03:39.425435 env[1142]: time="2024-12-13T04:03:39.425364243Z" level=info msg="RemoveContainer for \"cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599\"" Dec 13 04:03:39.451031 env[1142]: time="2024-12-13T04:03:39.450850750Z" level=info msg="RemoveContainer for \"cf5d113fed3f66371a869a39197ace24bb73e6926f74e7baaf448302b37f7599\" returns successfully" Dec 13 04:03:39.761982 sshd[3517]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:39.773871 systemd[1]: Started sshd@23-172.24.4.115:22-172.24.4.1:54876.service. Dec 13 04:03:39.777293 systemd[1]: sshd@22-172.24.4.115:22-172.24.4.1:34638.service: Deactivated successfully. Dec 13 04:03:39.781059 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 04:03:39.782412 systemd[1]: session-23.scope: Consumed 1.651s CPU time. Dec 13 04:03:39.784954 systemd-logind[1134]: Session 23 logged out. Waiting for processes to exit. Dec 13 04:03:39.789022 systemd-logind[1134]: Removed session 23. Dec 13 04:03:40.294782 kubelet[1973]: I1213 04:03:40.294724 1973 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1a34faea-a500-4bcb-85ce-7b85a42b01e0" path="/var/lib/kubelet/pods/1a34faea-a500-4bcb-85ce-7b85a42b01e0/volumes" Dec 13 04:03:40.297031 kubelet[1973]: I1213 04:03:40.296995 1973 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6686003e-0404-4e8c-bfda-9b230d216233" path="/var/lib/kubelet/pods/6686003e-0404-4e8c-bfda-9b230d216233/volumes" Dec 13 04:03:41.047806 sshd[3687]: Accepted publickey for core from 172.24.4.1 port 54876 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:41.051250 sshd[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:41.059540 systemd-logind[1134]: New session 24 of user core. Dec 13 04:03:41.062084 systemd[1]: Started session-24.scope. 
Dec 13 04:03:41.702793 kubelet[1973]: I1213 04:03:41.702769 1973 setters.go:568] "Node became not ready" node="ci-3510-3-6-f-1413c5ec2e.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T04:03:41Z","lastTransitionTime":"2024-12-13T04:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 04:03:42.410027 kubelet[1973]: I1213 04:03:42.409968 1973 topology_manager.go:215] "Topology Admit Handler" podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" podNamespace="kube-system" podName="cilium-kkhst" Dec 13 04:03:42.410276 kubelet[1973]: E1213 04:03:42.410064 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a34faea-a500-4bcb-85ce-7b85a42b01e0" containerName="mount-bpf-fs" Dec 13 04:03:42.410276 kubelet[1973]: E1213 04:03:42.410077 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a34faea-a500-4bcb-85ce-7b85a42b01e0" containerName="clean-cilium-state" Dec 13 04:03:42.410276 kubelet[1973]: E1213 04:03:42.410088 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6686003e-0404-4e8c-bfda-9b230d216233" containerName="cilium-operator" Dec 13 04:03:42.410276 kubelet[1973]: E1213 04:03:42.410096 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a34faea-a500-4bcb-85ce-7b85a42b01e0" containerName="mount-cgroup" Dec 13 04:03:42.410276 kubelet[1973]: E1213 04:03:42.410104 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a34faea-a500-4bcb-85ce-7b85a42b01e0" containerName="apply-sysctl-overwrites" Dec 13 04:03:42.410276 kubelet[1973]: E1213 04:03:42.410111 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a34faea-a500-4bcb-85ce-7b85a42b01e0" containerName="cilium-agent" Dec 13 04:03:42.410276 kubelet[1973]: I1213 04:03:42.410143 1973 memory_manager.go:354] "RemoveStaleState removing state" podUID="6686003e-0404-4e8c-bfda-9b230d216233" containerName="cilium-operator" Dec 13 04:03:42.410276 kubelet[1973]: I1213 04:03:42.410150 1973 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a34faea-a500-4bcb-85ce-7b85a42b01e0" containerName="cilium-agent" Dec 13 04:03:42.432504 systemd[1]: Created slice kubepods-burstable-pod1095ca6a_1b5f_4847_b4ba_dcaac6c2656c.slice. Dec 13 04:03:42.534094 sshd[3687]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:42.544063 systemd[1]: sshd@23-172.24.4.115:22-172.24.4.1:54876.service: Deactivated successfully. Dec 13 04:03:42.545969 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 04:03:42.547783 systemd-logind[1134]: Session 24 logged out. Waiting for processes to exit. Dec 13 04:03:42.551776 systemd[1]: Started sshd@24-172.24.4.115:22-172.24.4.1:54886.service. Dec 13 04:03:42.556327 systemd-logind[1134]: Removed session 24. 
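[editor's note] The burst of RemoveStaleState messages above is the CPU and memory managers discarding checkpointed per-container state for the two pods deleted earlier, triggered as the replacement pod (cilium-kkhst) is admitted. A rough model of that bookkeeping, assuming state keyed by (podUID, containerName) (purely illustrative, not kubelet source):

```python
# Rough model of RemoveStaleState: resource-manager checkpoints are keyed by
# (podUID, containerName); entries whose pod no longer exists are dropped
# when a new pod is admitted.
def remove_stale_state(state: dict, active_pod_uids: set) -> dict:
    kept = {}
    for (pod_uid, container), assignment in state.items():
        if pod_uid in active_pod_uids:
            kept[(pod_uid, container)] = assignment
        else:
            print(f"RemoveStaleState: removing container "
                  f"podUID={pod_uid} containerName={container}")
    return kept

state = {
    ("1a34faea-a500-4bcb-85ce-7b85a42b01e0", "cilium-agent"): "cpuset=0-1",
    ("6686003e-0404-4e8c-bfda-9b230d216233", "cilium-operator"): "cpuset=0",
}
# Only the new cilium-kkhst pod is active, so both old entries are purged.
state = remove_stale_state(state, {"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"})
```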
Dec 13 04:03:42.621779 kubelet[1973]: I1213 04:03:42.621724 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-host-proc-sys-kernel\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.622368 kubelet[1973]: I1213 04:03:42.622273 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-lib-modules\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.622741 kubelet[1973]: I1213 04:03:42.622606 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-clustermesh-secrets\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.623069 kubelet[1973]: I1213 04:03:42.622968 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cni-path\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.623417 kubelet[1973]: I1213 04:03:42.623319 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-host-proc-sys-net\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.623751 kubelet[1973]: I1213 04:03:42.623645 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-config-path\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.624080 kubelet[1973]: I1213 04:03:42.623986 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-run\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.624467 kubelet[1973]: I1213 04:03:42.624363 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-bpf-maps\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.624840 kubelet[1973]: I1213 04:03:42.624812 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-hostproc\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.625189 kubelet[1973]: I1213 04:03:42.625090 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-hubble-tls\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.625523 kubelet[1973]: I1213 04:03:42.625425 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpwq\" (UniqueName: \"kubernetes.io/projected/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-kube-api-access-wrpwq\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.625873 kubelet[1973]: I1213 04:03:42.625780 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-cgroup\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.626178 kubelet[1973]: I1213 04:03:42.626084 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-etc-cni-netd\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.626571 kubelet[1973]: I1213 04:03:42.626540 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-xtables-lock\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:42.626969 kubelet[1973]: I1213 04:03:42.626871 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-ipsec-secrets\") pod \"cilium-kkhst\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " pod="kube-system/cilium-kkhst" Dec 13 04:03:43.037333 env[1142]: time="2024-12-13T04:03:43.037180049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kkhst,Uid:1095ca6a-1b5f-4847-b4ba-dcaac6c2656c,Namespace:kube-system,Attempt:0,}" Dec 13 04:03:43.062021 env[1142]: time="2024-12-13T04:03:43.061869109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:03:43.062021 env[1142]: time="2024-12-13T04:03:43.061938940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:03:43.062021 env[1142]: time="2024-12-13T04:03:43.061954138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:03:43.062470 env[1142]: time="2024-12-13T04:03:43.062133696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002 pid=3715 runtime=io.containerd.runc.v2 Dec 13 04:03:43.079050 systemd[1]: Started cri-containerd-a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002.scope. 
Dec 13 04:03:43.157857 env[1142]: time="2024-12-13T04:03:43.157758258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kkhst,Uid:1095ca6a-1b5f-4847-b4ba-dcaac6c2656c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\"" Dec 13 04:03:43.167223 env[1142]: time="2024-12-13T04:03:43.166095100Z" level=info msg="CreateContainer within sandbox \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:03:43.180405 env[1142]: time="2024-12-13T04:03:43.180320267Z" level=info msg="CreateContainer within sandbox \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\"" Dec 13 04:03:43.188508 env[1142]: time="2024-12-13T04:03:43.187920242Z" level=info msg="StartContainer for \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\"" Dec 13 04:03:43.224224 systemd[1]: Started cri-containerd-4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704.scope. Dec 13 04:03:43.245327 systemd[1]: cri-containerd-4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704.scope: Deactivated successfully. Dec 13 04:03:43.269744 env[1142]: time="2024-12-13T04:03:43.269549208Z" level=info msg="shim disconnected" id=4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704 Dec 13 04:03:43.270010 env[1142]: time="2024-12-13T04:03:43.269988924Z" level=warning msg="cleaning up after shim disconnected" id=4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704 namespace=k8s.io Dec 13 04:03:43.270106 env[1142]: time="2024-12-13T04:03:43.270088422Z" level=info msg="cleaning up dead shim" Dec 13 04:03:43.281223 env[1142]: time="2024-12-13T04:03:43.281156130Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3776 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T04:03:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 04:03:43.281819 env[1142]: time="2024-12-13T04:03:43.281704001Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Dec 13 04:03:43.282831 env[1142]: time="2024-12-13T04:03:43.282794362Z" level=error msg="Failed to pipe stdout of container \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\"" error="reading from a closed fifo" Dec 13 04:03:43.282914 env[1142]: time="2024-12-13T04:03:43.282797959Z" level=error msg="Failed to pipe stderr of container \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\"" error="reading from a closed fifo" Dec 13 04:03:43.292373 env[1142]: time="2024-12-13T04:03:43.291750018Z" level=error msg="StartContainer for \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 04:03:43.292465 kubelet[1973]: E1213 04:03:43.292032 1973 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: 
code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704" Dec 13 04:03:43.294332 kubelet[1973]: E1213 04:03:43.294180 1973 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 04:03:43.294332 kubelet[1973]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 04:03:43.294332 kubelet[1973]: rm /hostbin/cilium-mount Dec 13 04:03:43.294473 kubelet[1973]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wrpwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kkhst_kube-system(1095ca6a-1b5f-4847-b4ba-dcaac6c2656c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 04:03:43.294473 kubelet[1973]: E1213 04:03:43.294237 1973 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kkhst" podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" Dec 13 04:03:43.407622 kubelet[1973]: E1213 04:03:43.407534 1973 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:03:44.055226 sshd[3700]: Accepted publickey for core from 172.24.4.1 port 54886 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:44.058294 sshd[3700]: 
pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:44.069810 systemd-logind[1134]: New session 25 of user core. Dec 13 04:03:44.070538 systemd[1]: Started session-25.scope. Dec 13 04:03:44.272095 env[1142]: time="2024-12-13T04:03:44.271882621Z" level=info msg="CreateContainer within sandbox \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 04:03:44.313606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070257691.mount: Deactivated successfully. Dec 13 04:03:44.334732 env[1142]: time="2024-12-13T04:03:44.332973457Z" level=info msg="CreateContainer within sandbox \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4\"" Dec 13 04:03:44.337373 env[1142]: time="2024-12-13T04:03:44.337276809Z" level=info msg="StartContainer for \"1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4\"" Dec 13 04:03:44.368685 systemd[1]: Started cri-containerd-1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4.scope. Dec 13 04:03:44.383746 systemd[1]: cri-containerd-1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4.scope: Deactivated successfully. Dec 13 04:03:44.401130 env[1142]: time="2024-12-13T04:03:44.401003371Z" level=info msg="shim disconnected" id=1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4 Dec 13 04:03:44.401130 env[1142]: time="2024-12-13T04:03:44.401085404Z" level=warning msg="cleaning up after shim disconnected" id=1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4 namespace=k8s.io Dec 13 04:03:44.401130 env[1142]: time="2024-12-13T04:03:44.401100433Z" level=info msg="cleaning up dead shim" Dec 13 04:03:44.410590 env[1142]: time="2024-12-13T04:03:44.410511058Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3815 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T04:03:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 04:03:44.410924 env[1142]: time="2024-12-13T04:03:44.410849172Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Dec 13 04:03:44.413807 env[1142]: time="2024-12-13T04:03:44.413747751Z" level=error msg="Failed to pipe stdout of container \"1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4\"" error="reading from a closed fifo" Dec 13 04:03:44.413998 env[1142]: time="2024-12-13T04:03:44.413969446Z" level=error msg="Failed to pipe stderr of container \"1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4\"" error="reading from a closed fifo" Dec 13 04:03:44.418529 env[1142]: time="2024-12-13T04:03:44.418456655Z" level=error msg="StartContainer for \"1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 04:03:44.418840 kubelet[1973]: E1213 04:03:44.418803 1973 remote_runtime.go:343] "StartContainer from runtime service 
failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4" Dec 13 04:03:44.419214 kubelet[1973]: E1213 04:03:44.418928 1973 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 04:03:44.419214 kubelet[1973]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 04:03:44.419214 kubelet[1973]: rm /hostbin/cilium-mount Dec 13 04:03:44.419214 kubelet[1973]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wrpwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-kkhst_kube-system(1095ca6a-1b5f-4847-b4ba-dcaac6c2656c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 04:03:44.419214 kubelet[1973]: E1213 04:03:44.418979 1973 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kkhst" podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" Dec 13 04:03:44.749339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4-rootfs.mount: Deactivated successfully. Dec 13 04:03:44.891441 sshd[3700]: pam_unix(sshd:session): session closed for user core Dec 13 04:03:44.894959 systemd[1]: sshd@24-172.24.4.115:22-172.24.4.1:54886.service: Deactivated successfully. 
Dec 13 04:03:44.896076 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 04:03:44.897520 systemd-logind[1134]: Session 25 logged out. Waiting for processes to exit. Dec 13 04:03:44.899476 systemd[1]: Started sshd@25-172.24.4.115:22-172.24.4.1:42846.service. Dec 13 04:03:44.903046 systemd-logind[1134]: Removed session 25. Dec 13 04:03:45.267269 kubelet[1973]: I1213 04:03:45.267212 1973 scope.go:117] "RemoveContainer" containerID="4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704" Dec 13 04:03:45.267739 kubelet[1973]: I1213 04:03:45.267704 1973 scope.go:117] "RemoveContainer" containerID="4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704" Dec 13 04:03:45.273625 env[1142]: time="2024-12-13T04:03:45.273043254Z" level=info msg="RemoveContainer for \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\"" Dec 13 04:03:45.276277 env[1142]: time="2024-12-13T04:03:45.275432938Z" level=info msg="RemoveContainer for \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\"" Dec 13 04:03:45.276277 env[1142]: time="2024-12-13T04:03:45.275617394Z" level=error msg="RemoveContainer for \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\" failed" error="failed to set removing state for container \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\": container is already in removing state" Dec 13 04:03:45.276498 kubelet[1973]: E1213 04:03:45.276133 1973 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\": container is already in removing state" containerID="4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704" Dec 13 04:03:45.276498 kubelet[1973]: E1213 04:03:45.276204 1973 kuberuntime_container.go:858] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704": container is already in removing state; Skipping pod "cilium-kkhst_kube-system(1095ca6a-1b5f-4847-b4ba-dcaac6c2656c)" Dec 13 04:03:45.283432 kubelet[1973]: E1213 04:03:45.276838 1973 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-kkhst_kube-system(1095ca6a-1b5f-4847-b4ba-dcaac6c2656c)\"" pod="kube-system/cilium-kkhst" podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" Dec 13 04:03:45.283619 env[1142]: time="2024-12-13T04:03:45.281654941Z" level=info msg="RemoveContainer for \"4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704\" returns successfully" Dec 13 04:03:46.121246 sshd[3837]: Accepted publickey for core from 172.24.4.1 port 42846 ssh2: RSA SHA256:OkcE/e8cyiYfDhFAjIOhJbymiCk6iRYfYgj/ZDa0TCk Dec 13 04:03:46.126000 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:03:46.139796 systemd-logind[1134]: New session 26 of user core. Dec 13 04:03:46.142170 systemd[1]: Started session-26.scope. 
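[editor's note] After the second identical failure the kubelet stops retrying immediately and parks the pod in CrashLoopBackOff ("back-off 10s restarting failed container=mount-cgroup ..."); the "container is already in removing state" error just above it is a harmless race between two cleanup paths removing the same failed attempt. Per the Kubernetes documentation, the back-off starts at 10s and doubles per restart up to a 5-minute cap; a one-function sketch of that schedule:

```python
# Kubelet CrashLoopBackOff schedule: 10s initial delay, doubling per restart,
# capped at 5 minutes (values per Kubernetes documentation; sketch only).
def backoff_schedule(restarts: int, base: int = 10, cap: int = 300):
    return [min(base * 2 ** i, cap) for i in range(restarts)]

print(backoff_schedule(7))  # [10, 20, 40, 80, 160, 300, 300]
```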
Dec 13 04:03:46.277706 env[1142]: time="2024-12-13T04:03:46.274899027Z" level=info msg="StopPodSandbox for \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\"" Dec 13 04:03:46.277706 env[1142]: time="2024-12-13T04:03:46.275057394Z" level=info msg="Container to stop \"1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:03:46.286810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002-shm.mount: Deactivated successfully. Dec 13 04:03:46.305267 systemd[1]: cri-containerd-a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002.scope: Deactivated successfully. Dec 13 04:03:46.354067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002-rootfs.mount: Deactivated successfully. Dec 13 04:03:46.406569 kubelet[1973]: W1213 04:03:46.399310 1973 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1095ca6a_1b5f_4847_b4ba_dcaac6c2656c.slice/cri-containerd-4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704.scope WatchSource:0}: container "4121dfaa3c7d4eec2740ddf52b549aa101c444afd5db23ed01d54bc61eeea704" in namespace "k8s.io": not found Dec 13 04:03:46.546347 env[1142]: time="2024-12-13T04:03:46.546196967Z" level=info msg="shim disconnected" id=a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002 Dec 13 04:03:46.546347 env[1142]: time="2024-12-13T04:03:46.546301924Z" level=warning msg="cleaning up after shim disconnected" id=a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002 namespace=k8s.io Dec 13 04:03:46.546347 env[1142]: time="2024-12-13T04:03:46.546332250Z" level=info msg="cleaning up dead shim" Dec 13 04:03:46.586535 env[1142]: time="2024-12-13T04:03:46.586448139Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3858 runtime=io.containerd.runc.v2\n" Dec 13 04:03:46.587566 env[1142]: time="2024-12-13T04:03:46.587505373Z" level=info msg="TearDown network for sandbox \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" successfully" Dec 13 04:03:46.589887 env[1142]: time="2024-12-13T04:03:46.589831567Z" level=info msg="StopPodSandbox for \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" returns successfully" Dec 13 04:03:46.666505 kubelet[1973]: I1213 04:03:46.666367 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-bpf-maps\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666505 kubelet[1973]: I1213 04:03:46.666452 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-clustermesh-secrets\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666505 kubelet[1973]: I1213 04:03:46.666488 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-hostproc\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: 
\"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666750 kubelet[1973]: I1213 04:03:46.666540 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrpwq\" (UniqueName: \"kubernetes.io/projected/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-kube-api-access-wrpwq\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666750 kubelet[1973]: I1213 04:03:46.666565 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cni-path\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666750 kubelet[1973]: I1213 04:03:46.666592 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-cgroup\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666750 kubelet[1973]: I1213 04:03:46.666633 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-config-path\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666750 kubelet[1973]: I1213 04:03:46.666689 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-xtables-lock\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666750 kubelet[1973]: I1213 04:03:46.666715 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-ipsec-secrets\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666750 kubelet[1973]: I1213 04:03:46.666736 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-host-proc-sys-kernel\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666951 kubelet[1973]: I1213 04:03:46.666772 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-host-proc-sys-net\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666951 kubelet[1973]: I1213 04:03:46.666797 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-lib-modules\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666951 kubelet[1973]: I1213 04:03:46.666817 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-etc-cni-netd\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: 
\"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666951 kubelet[1973]: I1213 04:03:46.666856 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-run\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.666951 kubelet[1973]: I1213 04:03:46.666882 1973 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-hubble-tls\") pod \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\" (UID: \"1095ca6a-1b5f-4847-b4ba-dcaac6c2656c\") " Dec 13 04:03:46.669806 kubelet[1973]: I1213 04:03:46.669766 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:03:46.669941 kubelet[1973]: I1213 04:03:46.669922 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.670401 kubelet[1973]: I1213 04:03:46.670352 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.672496 kubelet[1973]: I1213 04:03:46.672464 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-hostproc" (OuterVolumeSpecName: "hostproc") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.675227 systemd[1]: var-lib-kubelet-pods-1095ca6a\x2d1b5f\x2d4847\x2db4ba\x2ddcaac6c2656c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:03:46.679992 systemd[1]: var-lib-kubelet-pods-1095ca6a\x2d1b5f\x2d4847\x2db4ba\x2ddcaac6c2656c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 04:03:46.682642 systemd[1]: var-lib-kubelet-pods-1095ca6a\x2d1b5f\x2d4847\x2db4ba\x2ddcaac6c2656c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:03:46.692478 kubelet[1973]: I1213 04:03:46.692424 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.692618 kubelet[1973]: I1213 04:03:46.692499 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.692618 kubelet[1973]: I1213 04:03:46.692529 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.692788 kubelet[1973]: I1213 04:03:46.692756 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:46.692838 kubelet[1973]: I1213 04:03:46.692804 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.692889 kubelet[1973]: I1213 04:03:46.692862 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.692994 kubelet[1973]: I1213 04:03:46.692964 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-kube-api-access-wrpwq" (OuterVolumeSpecName: "kube-api-access-wrpwq") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "kube-api-access-wrpwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:03:46.693163 kubelet[1973]: I1213 04:03:46.693109 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:03:46.693522 kubelet[1973]: I1213 04:03:46.693483 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:03:46.695962 kubelet[1973]: I1213 04:03:46.695891 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cni-path" (OuterVolumeSpecName: "cni-path") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.696105 kubelet[1973]: I1213 04:03:46.696088 1973 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" (UID: "1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:03:46.768193 kubelet[1973]: I1213 04:03:46.768136 1973 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-hubble-tls\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.768575 kubelet[1973]: I1213 04:03:46.768546 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-run\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.768803 kubelet[1973]: I1213 04:03:46.768778 1973 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-bpf-maps\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.769061 kubelet[1973]: I1213 04:03:46.769034 1973 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wrpwq\" (UniqueName: \"kubernetes.io/projected/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-kube-api-access-wrpwq\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769286 1973 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-clustermesh-secrets\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769327 1973 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-hostproc\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769357 1973 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cni-path\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769387 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-cgroup\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769438 1973 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-xtables-lock\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 
04:03:46.778151 kubelet[1973]: I1213 04:03:46.769470 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-ipsec-secrets\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769530 1973 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-cilium-config-path\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769562 1973 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-host-proc-sys-kernel\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769593 1973 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-host-proc-sys-net\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769622 1973 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-lib-modules\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:46.778151 kubelet[1973]: I1213 04:03:46.769650 1973 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c-etc-cni-netd\") on node \"ci-3510-3-6-f-1413c5ec2e.novalocal\" DevicePath \"\"" Dec 13 04:03:47.284794 kubelet[1973]: I1213 04:03:47.283529 1973 scope.go:117] "RemoveContainer" containerID="1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4" Dec 13 04:03:47.283793 systemd[1]: var-lib-kubelet-pods-1095ca6a\x2d1b5f\x2d4847\x2db4ba\x2ddcaac6c2656c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwrpwq.mount: Deactivated successfully. Dec 13 04:03:47.291720 env[1142]: time="2024-12-13T04:03:47.290875378Z" level=info msg="RemoveContainer for \"1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4\"" Dec 13 04:03:47.303307 systemd[1]: Removed slice kubepods-burstable-pod1095ca6a_1b5f_4847_b4ba_dcaac6c2656c.slice. 
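
The long systemd mount-unit names above (var-lib-kubelet-pods-...\x2d...-volumes-kubernetes.io\x7esecret-...mount) are kubelet volume paths run through systemd's unit-name escaping: "/" becomes "-", and any other character outside [a-zA-Z0-9_.] (including "-" itself and "~") is hex-escaped as \xNN. A minimal Python sketch of that mapping (simplified; the real systemd-escape also special-cases the root path and a leading dot):

    # Minimal sketch of systemd's path-to-unit-name escaping (simplified).
    def systemd_escape_path(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")                  # path separators become dashes
            elif ch.isalnum() or ch in "_.":
                out.append(ch)                   # safe characters pass through
            else:
                out.append("\\x%02x" % ord(ch))  # '-' -> \x2d, '~' -> \x7e, ...
        return "".join(out)

    # Reproduces the clustermesh-secrets mount unit unmounted above.
    print(systemd_escape_path(
        "/var/lib/kubelet/pods/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c"
        "/volumes/kubernetes.io~secret/clustermesh-secrets") + ".mount")
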
Dec 13 04:03:47.382418 env[1142]: time="2024-12-13T04:03:47.382286846Z" level=info msg="RemoveContainer for \"1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4\" returns successfully" Dec 13 04:03:47.935024 kubelet[1973]: I1213 04:03:47.934967 1973 topology_manager.go:215] "Topology Admit Handler" podUID="94d37689-803f-408b-96af-f0b3757b2a45" podNamespace="kube-system" podName="cilium-smds5" Dec 13 04:03:47.935934 kubelet[1973]: E1213 04:03:47.935900 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" containerName="mount-cgroup" Dec 13 04:03:47.936182 kubelet[1973]: I1213 04:03:47.936156 1973 memory_manager.go:354] "RemoveStaleState removing state" podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" containerName="mount-cgroup" Dec 13 04:03:47.936359 kubelet[1973]: I1213 04:03:47.936335 1973 memory_manager.go:354] "RemoveStaleState removing state" podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" containerName="mount-cgroup" Dec 13 04:03:47.936550 kubelet[1973]: E1213 04:03:47.936526 1973 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" containerName="mount-cgroup" Dec 13 04:03:47.949497 systemd[1]: Created slice kubepods-burstable-pod94d37689_803f_408b_96af_f0b3757b2a45.slice. Dec 13 04:03:47.981197 kubelet[1973]: I1213 04:03:47.981098 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-host-proc-sys-kernel\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.981197 kubelet[1973]: I1213 04:03:47.981211 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-host-proc-sys-net\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.981586 kubelet[1973]: I1213 04:03:47.981281 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-hostproc\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.981586 kubelet[1973]: I1213 04:03:47.981342 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-cilium-cgroup\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.981586 kubelet[1973]: I1213 04:03:47.981403 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-cni-path\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.981586 kubelet[1973]: I1213 04:03:47.981458 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-etc-cni-netd\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" 
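
Each kubelet entry carries two timestamps because two formats are nested: the journald prefix ("Dec 13 04:03:47... kubelet[1973]:") and klog's own header ("I1213 04:03:47.934967 1973 topology_manager.go:215]"), where the leading letter is the severity (I, W, E, F) followed by month+day, wall-clock time, the process id, and the emitting source file and line. A sketch of a parser for that header (the regex is an assumption, matched against the kubelet entries in this log):

    import re

    # klog header: <sev><mmdd> <hh:mm:ss.uuuuuu> <pid> <file:line>] <msg>
    KLOG = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4})\s+(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})"
        r"\s+(?P<pid>\d+)\s+(?P<src>\S+?):(?P<line>\d+)\]\s+(?P<msg>.*)")

    entry = ('E1213 04:03:47.935900 1973 cpu_manager.go:395] '
             '"RemoveStaleState: removing container" '
             'podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" '
             'containerName="mount-cgroup"')
    m = KLOG.match(entry)
    print(m["sev"], m["src"] + ":" + m["line"])   # E cpu_manager.go:395
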
Dec 13 04:03:47.981586 kubelet[1973]: I1213 04:03:47.981516 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94d37689-803f-408b-96af-f0b3757b2a45-hubble-tls\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.982203 kubelet[1973]: I1213 04:03:47.981618 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-bpf-maps\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.982203 kubelet[1973]: I1213 04:03:47.981762 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-xtables-lock\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.982203 kubelet[1973]: I1213 04:03:47.981836 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mf8h\" (UniqueName: \"kubernetes.io/projected/94d37689-803f-408b-96af-f0b3757b2a45-kube-api-access-8mf8h\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.982203 kubelet[1973]: I1213 04:03:47.981896 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94d37689-803f-408b-96af-f0b3757b2a45-clustermesh-secrets\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.982203 kubelet[1973]: I1213 04:03:47.981957 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/94d37689-803f-408b-96af-f0b3757b2a45-cilium-ipsec-secrets\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.982203 kubelet[1973]: I1213 04:03:47.982015 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-cilium-run\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.982203 kubelet[1973]: I1213 04:03:47.982080 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94d37689-803f-408b-96af-f0b3757b2a45-lib-modules\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:47.982203 kubelet[1973]: I1213 04:03:47.982144 1973 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94d37689-803f-408b-96af-f0b3757b2a45-cilium-config-path\") pod \"cilium-smds5\" (UID: \"94d37689-803f-408b-96af-f0b3757b2a45\") " pod="kube-system/cilium-smds5" Dec 13 04:03:48.258604 env[1142]: time="2024-12-13T04:03:48.257924284Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-smds5,Uid:94d37689-803f-408b-96af-f0b3757b2a45,Namespace:kube-system,Attempt:0,}" Dec 13 04:03:48.278310 kubelet[1973]: I1213 04:03:48.278236 1973 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1095ca6a-1b5f-4847-b4ba-dcaac6c2656c" path="/var/lib/kubelet/pods/1095ca6a-1b5f-4847-b4ba-dcaac6c2656c/volumes" Dec 13 04:03:48.409254 kubelet[1973]: E1213 04:03:48.409200 1973 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 04:03:48.522703 env[1142]: time="2024-12-13T04:03:48.522407422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:03:48.523354 env[1142]: time="2024-12-13T04:03:48.522505907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:03:48.523354 env[1142]: time="2024-12-13T04:03:48.522540932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:03:48.523790 env[1142]: time="2024-12-13T04:03:48.523585913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46 pid=3893 runtime=io.containerd.runc.v2 Dec 13 04:03:48.581746 systemd[1]: run-containerd-runc-k8s.io-b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46-runc.2IOtwh.mount: Deactivated successfully. Dec 13 04:03:48.588836 systemd[1]: Started cri-containerd-b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46.scope. Dec 13 04:03:48.633975 env[1142]: time="2024-12-13T04:03:48.633876883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-smds5,Uid:94d37689-803f-408b-96af-f0b3757b2a45,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\"" Dec 13 04:03:48.639921 env[1142]: time="2024-12-13T04:03:48.639822319Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:03:48.686652 env[1142]: time="2024-12-13T04:03:48.686555900Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ebbe8da5066e80426d48a5fa1a73ba4d93161bd9215a4f96c757933e5417af6\"" Dec 13 04:03:48.692257 env[1142]: time="2024-12-13T04:03:48.692150337Z" level=info msg="StartContainer for \"9ebbe8da5066e80426d48a5fa1a73ba4d93161bd9215a4f96c757933e5417af6\"" Dec 13 04:03:48.737905 systemd[1]: Started cri-containerd-9ebbe8da5066e80426d48a5fa1a73ba4d93161bd9215a4f96c757933e5417af6.scope. Dec 13 04:03:48.789612 env[1142]: time="2024-12-13T04:03:48.789452766Z" level=info msg="StartContainer for \"9ebbe8da5066e80426d48a5fa1a73ba4d93161bd9215a4f96c757933e5417af6\" returns successfully" Dec 13 04:03:48.803801 systemd[1]: cri-containerd-9ebbe8da5066e80426d48a5fa1a73ba4d93161bd9215a4f96c757933e5417af6.scope: Deactivated successfully. 
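
The env[1142] (containerd) entries use logfmt, space-separated key=value pairs with optional quoting, so a few lines of Python recover structured fields; shlex does the quote handling (a sketch assuming well-formed pairs):

    import shlex

    def parse_logfmt(entry: str) -> dict:
        # shlex strips the quoting around values like msg="starting signal loop"
        return dict(tok.split("=", 1) for tok in shlex.split(entry) if "=" in tok)

    entry = ('time="2024-12-13T04:03:48.523585913Z" level=info '
             'msg="starting signal loop" namespace=k8s.io pid=3893 '
             'runtime=io.containerd.runc.v2')
    f = parse_logfmt(entry)
    print(f["level"], f["msg"], f["pid"])   # info starting signal loop 3893
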
Dec 13 04:03:48.848769 env[1142]: time="2024-12-13T04:03:48.848614086Z" level=info msg="shim disconnected" id=9ebbe8da5066e80426d48a5fa1a73ba4d93161bd9215a4f96c757933e5417af6 Dec 13 04:03:48.849321 env[1142]: time="2024-12-13T04:03:48.848767383Z" level=warning msg="cleaning up after shim disconnected" id=9ebbe8da5066e80426d48a5fa1a73ba4d93161bd9215a4f96c757933e5417af6 namespace=k8s.io Dec 13 04:03:48.849321 env[1142]: time="2024-12-13T04:03:48.848802890Z" level=info msg="cleaning up dead shim" Dec 13 04:03:48.857757 env[1142]: time="2024-12-13T04:03:48.857706507Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3977 runtime=io.containerd.runc.v2\n" Dec 13 04:03:49.306236 env[1142]: time="2024-12-13T04:03:49.306152680Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 04:03:49.501753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760044476.mount: Deactivated successfully. Dec 13 04:03:49.512324 kubelet[1973]: W1213 04:03:49.512251 1973 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1095ca6a_1b5f_4847_b4ba_dcaac6c2656c.slice/cri-containerd-1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4.scope WatchSource:0}: container "1b5e74defcffd56506bd60ab6645194bfc4baa0ac42fed42e7f9ea4e1634c4c4" in namespace "k8s.io": not found Dec 13 04:03:49.555500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297361414.mount: Deactivated successfully. Dec 13 04:03:49.583107 env[1142]: time="2024-12-13T04:03:49.582843340Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4\"" Dec 13 04:03:49.588262 env[1142]: time="2024-12-13T04:03:49.587248626Z" level=info msg="StartContainer for \"63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4\"" Dec 13 04:03:49.650631 systemd[1]: Started cri-containerd-63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4.scope. Dec 13 04:03:49.688143 env[1142]: time="2024-12-13T04:03:49.688055486Z" level=info msg="StartContainer for \"63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4\" returns successfully" Dec 13 04:03:49.697162 systemd[1]: cri-containerd-63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4.scope: Deactivated successfully. 
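
The W manager.go:1169 lines ("Failed to process watch event ... not found") are cAdvisor handling a cgroup watch event for a container that has already exited and been cleaned up; with the short-lived init containers in this section that race is expected, and the warnings appear benign. A trivial filter makes the lifecycle entries easier to follow ("journal.txt" is a hypothetical path for this captured log):

    # Hide the recurring cAdvisor watch warnings so lifecycle entries stand out.
    NOISE = "Failed to process watch event"
    with open("journal.txt") as f:
        for line in f:
            if NOISE not in line:
                print(line, end="")
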
Dec 13 04:03:49.730582 env[1142]: time="2024-12-13T04:03:49.730479594Z" level=info msg="shim disconnected" id=63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4 Dec 13 04:03:49.730582 env[1142]: time="2024-12-13T04:03:49.730576536Z" level=warning msg="cleaning up after shim disconnected" id=63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4 namespace=k8s.io Dec 13 04:03:49.730978 env[1142]: time="2024-12-13T04:03:49.730594129Z" level=info msg="cleaning up dead shim" Dec 13 04:03:49.740049 env[1142]: time="2024-12-13T04:03:49.739973779Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4039 runtime=io.containerd.runc.v2\n" Dec 13 04:03:50.317720 env[1142]: time="2024-12-13T04:03:50.316222917Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 04:03:50.500944 systemd[1]: run-containerd-runc-k8s.io-63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4-runc.H0NqFm.mount: Deactivated successfully. Dec 13 04:03:50.501184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4-rootfs.mount: Deactivated successfully. Dec 13 04:03:50.549323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472150830.mount: Deactivated successfully. Dec 13 04:03:50.576242 env[1142]: time="2024-12-13T04:03:50.576010354Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e\"" Dec 13 04:03:50.580344 env[1142]: time="2024-12-13T04:03:50.580235412Z" level=info msg="StartContainer for \"2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e\"" Dec 13 04:03:50.633803 systemd[1]: Started cri-containerd-2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e.scope. Dec 13 04:03:50.689708 env[1142]: time="2024-12-13T04:03:50.689597492Z" level=info msg="StartContainer for \"2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e\" returns successfully" Dec 13 04:03:50.700645 systemd[1]: cri-containerd-2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e.scope: Deactivated successfully. 
Dec 13 04:03:50.743340 env[1142]: time="2024-12-13T04:03:50.743232119Z" level=info msg="shim disconnected" id=2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e Dec 13 04:03:50.743340 env[1142]: time="2024-12-13T04:03:50.743309394Z" level=warning msg="cleaning up after shim disconnected" id=2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e namespace=k8s.io Dec 13 04:03:50.743340 env[1142]: time="2024-12-13T04:03:50.743324352Z" level=info msg="cleaning up dead shim" Dec 13 04:03:50.753032 env[1142]: time="2024-12-13T04:03:50.752940878Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4094 runtime=io.containerd.runc.v2\n" Dec 13 04:03:51.347707 env[1142]: time="2024-12-13T04:03:51.345585498Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 04:03:51.382901 env[1142]: time="2024-12-13T04:03:51.382618173Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d3b697c0b90cfa6efbace13910e9b3f7ff5481f005d5ebb7aba6a906bebd1c76\"" Dec 13 04:03:51.384866 env[1142]: time="2024-12-13T04:03:51.384796070Z" level=info msg="StartContainer for \"d3b697c0b90cfa6efbace13910e9b3f7ff5481f005d5ebb7aba6a906bebd1c76\"" Dec 13 04:03:51.426249 systemd[1]: Started cri-containerd-d3b697c0b90cfa6efbace13910e9b3f7ff5481f005d5ebb7aba6a906bebd1c76.scope. Dec 13 04:03:51.462183 systemd[1]: cri-containerd-d3b697c0b90cfa6efbace13910e9b3f7ff5481f005d5ebb7aba6a906bebd1c76.scope: Deactivated successfully. Dec 13 04:03:51.470260 env[1142]: time="2024-12-13T04:03:51.469599059Z" level=info msg="StartContainer for \"d3b697c0b90cfa6efbace13910e9b3f7ff5481f005d5ebb7aba6a906bebd1c76\" returns successfully" Dec 13 04:03:51.500099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e-rootfs.mount: Deactivated successfully. 
Dec 13 04:03:51.501864 env[1142]: time="2024-12-13T04:03:51.501772676Z" level=info msg="shim disconnected" id=d3b697c0b90cfa6efbace13910e9b3f7ff5481f005d5ebb7aba6a906bebd1c76 Dec 13 04:03:51.502004 env[1142]: time="2024-12-13T04:03:51.501985185Z" level=warning msg="cleaning up after shim disconnected" id=d3b697c0b90cfa6efbace13910e9b3f7ff5481f005d5ebb7aba6a906bebd1c76 namespace=k8s.io Dec 13 04:03:51.502078 env[1142]: time="2024-12-13T04:03:51.502063732Z" level=info msg="cleaning up dead shim" Dec 13 04:03:51.510954 env[1142]: time="2024-12-13T04:03:51.510891568Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:03:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4152 runtime=io.containerd.runc.v2\n" Dec 13 04:03:52.344055 env[1142]: time="2024-12-13T04:03:52.342705443Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 04:03:52.548230 env[1142]: time="2024-12-13T04:03:52.548132776Z" level=info msg="CreateContainer within sandbox \"b2c0d9001de436fbe72975e057881f2331eeedacfb0f00535f0ac2acc5374e46\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc\"" Dec 13 04:03:52.549324 env[1142]: time="2024-12-13T04:03:52.549267366Z" level=info msg="StartContainer for \"bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc\"" Dec 13 04:03:52.604590 systemd[1]: run-containerd-runc-k8s.io-bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc-runc.91PBD9.mount: Deactivated successfully. Dec 13 04:03:52.610572 systemd[1]: Started cri-containerd-bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc.scope. Dec 13 04:03:52.658226 kubelet[1973]: W1213 04:03:52.657935 1973 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94d37689_803f_408b_96af_f0b3757b2a45.slice/cri-containerd-9ebbe8da5066e80426d48a5fa1a73ba4d93161bd9215a4f96c757933e5417af6.scope WatchSource:0}: task 9ebbe8da5066e80426d48a5fa1a73ba4d93161bd9215a4f96c757933e5417af6 not found: not found Dec 13 04:03:52.667491 env[1142]: time="2024-12-13T04:03:52.667172681Z" level=info msg="StartContainer for \"bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc\" returns successfully" Dec 13 04:03:54.075718 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 04:03:54.133712 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Dec 13 04:03:55.741782 systemd[1]: run-containerd-runc-k8s.io-bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc-runc.nUjzlW.mount: Deactivated successfully. Dec 13 04:03:55.771699 kubelet[1973]: W1213 04:03:55.769449 1973 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94d37689_803f_408b_96af_f0b3757b2a45.slice/cri-containerd-63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4.scope WatchSource:0}: task 63bd6870ae9f0352e9d90dfd1a091494a35b16f03f1613764ff2fb3befd8f2e4 not found: not found Dec 13 04:03:57.949448 systemd[1]: run-containerd-runc-k8s.io-bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc-runc.23rRkh.mount: Deactivated successfully. 
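
By this point every Cilium init step has run the same four-entry cycle (CreateContainer, StartContainer, scope deactivated, shim disconnected and cleaned up), and the ContainerMetadata names give the pod's container order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then the long-running cilium-agent. A sketch that recovers that order from the log text (regex and the "journal.txt" path are assumptions):

    import re

    # Pull container names from the CreateContainer entries, in first-seen order.
    NAME = re.compile(r"&ContainerMetadata\{Name:([\w-]+),")
    log = open("journal.txt").read()
    order = list(dict.fromkeys(NAME.findall(log)))
    print(order)
    # ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
    #  'clean-cilium-state', 'cilium-agent']
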
Dec 13 04:03:58.183130 systemd-networkd[971]: lxc_health: Link UP Dec 13 04:03:58.199149 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 04:03:58.198312 systemd-networkd[971]: lxc_health: Gained carrier Dec 13 04:03:58.297377 kubelet[1973]: I1213 04:03:58.297312 1973 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-smds5" podStartSLOduration=11.297245073 podStartE2EDuration="11.297245073s" podCreationTimestamp="2024-12-13 04:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:03:53.439030233 +0000 UTC m=+165.355773235" watchObservedRunningTime="2024-12-13 04:03:58.297245073 +0000 UTC m=+170.213988025" Dec 13 04:03:58.878986 kubelet[1973]: W1213 04:03:58.878914 1973 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94d37689_803f_408b_96af_f0b3757b2a45.slice/cri-containerd-2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e.scope WatchSource:0}: task 2913a26d45f4956002c3bd5e355da7c0cd16071654e00786ac8313b2c46dd92e not found: not found Dec 13 04:03:59.846089 systemd-networkd[971]: lxc_health: Gained IPv6LL Dec 13 04:04:00.169059 systemd[1]: run-containerd-runc-k8s.io-bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc-runc.VBMDNN.mount: Deactivated successfully. Dec 13 04:04:01.996989 kubelet[1973]: W1213 04:04:01.996885 1973 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94d37689_803f_408b_96af_f0b3757b2a45.slice/cri-containerd-d3b697c0b90cfa6efbace13910e9b3f7ff5481f005d5ebb7aba6a906bebd1c76.scope WatchSource:0}: task d3b697c0b90cfa6efbace13910e9b3f7ff5481f005d5ebb7aba6a906bebd1c76 not found: not found Dec 13 04:04:02.409050 systemd[1]: run-containerd-runc-k8s.io-bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc-runc.Pr3rVC.mount: Deactivated successfully. Dec 13 04:04:04.772483 systemd[1]: run-containerd-runc-k8s.io-bd50ca73ac90af244fb865f86ce4b17d23e0f518b60e17cf01888fdbc6d742fc-runc.TE6j2v.mount: Deactivated successfully. Dec 13 04:04:05.051999 sshd[3837]: pam_unix(sshd:session): session closed for user core Dec 13 04:04:05.066042 systemd[1]: sshd@25-172.24.4.115:22-172.24.4.1:42846.service: Deactivated successfully. Dec 13 04:04:05.067783 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 04:04:05.069516 systemd-logind[1134]: Session 26 logged out. Waiting for processes to exit. Dec 13 04:04:05.073214 systemd-logind[1134]: Removed session 26. 
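
The pod_startup_latency_tracker entry above reports podStartSLOduration=11.297245073s for cilium-smds5; with both pull timestamps at the zero value (the image was already present), the SLO duration is simply observedRunningTime minus podCreationTimestamp, which is easy to confirm (nanoseconds truncated to microseconds):

    from datetime import datetime, timezone

    created  = datetime(2024, 12, 13, 4, 3, 47, tzinfo=timezone.utc)
    observed = datetime(2024, 12, 13, 4, 3, 58, 297245, tzinfo=timezone.utc)
    print((observed - created).total_seconds())   # 11.297245
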
Dec 13 04:04:08.294369 env[1142]: time="2024-12-13T04:04:08.294119160Z" level=info msg="StopPodSandbox for \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\"" Dec 13 04:04:08.300963 env[1142]: time="2024-12-13T04:04:08.294460892Z" level=info msg="TearDown network for sandbox \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\" successfully" Dec 13 04:04:08.300963 env[1142]: time="2024-12-13T04:04:08.294549599Z" level=info msg="StopPodSandbox for \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\" returns successfully" Dec 13 04:04:08.300963 env[1142]: time="2024-12-13T04:04:08.296608033Z" level=info msg="RemovePodSandbox for \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\"" Dec 13 04:04:08.300963 env[1142]: time="2024-12-13T04:04:08.296723320Z" level=info msg="Forcibly stopping sandbox \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\"" Dec 13 04:04:08.300963 env[1142]: time="2024-12-13T04:04:08.296917485Z" level=info msg="TearDown network for sandbox \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\" successfully" Dec 13 04:04:08.662775 env[1142]: time="2024-12-13T04:04:08.661950507Z" level=info msg="RemovePodSandbox \"2d29045ac6afc264b4e07d461c6a04c478fd00eb82cf04ed64c279988203139a\" returns successfully" Dec 13 04:04:08.664323 env[1142]: time="2024-12-13T04:04:08.663867906Z" level=info msg="StopPodSandbox for \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\"" Dec 13 04:04:08.664323 env[1142]: time="2024-12-13T04:04:08.664075776Z" level=info msg="TearDown network for sandbox \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" successfully" Dec 13 04:04:08.664323 env[1142]: time="2024-12-13T04:04:08.664161828Z" level=info msg="StopPodSandbox for \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" returns successfully" Dec 13 04:04:08.665299 env[1142]: time="2024-12-13T04:04:08.665069332Z" level=info msg="RemovePodSandbox for \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\"" Dec 13 04:04:08.665299 env[1142]: time="2024-12-13T04:04:08.665134975Z" level=info msg="Forcibly stopping sandbox \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\"" Dec 13 04:04:08.665299 env[1142]: time="2024-12-13T04:04:08.665283423Z" level=info msg="TearDown network for sandbox \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" successfully" Dec 13 04:04:08.674936 env[1142]: time="2024-12-13T04:04:08.674830298Z" level=info msg="RemovePodSandbox \"a52f6e001c63a12e8b4a0b0a492beac825d5d13f90a0a42da8ec2d3799849002\" returns successfully" Dec 13 04:04:08.675948 env[1142]: time="2024-12-13T04:04:08.675649846Z" level=info msg="StopPodSandbox for \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\"" Dec 13 04:04:08.676140 env[1142]: time="2024-12-13T04:04:08.676043455Z" level=info msg="TearDown network for sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" successfully" Dec 13 04:04:08.676140 env[1142]: time="2024-12-13T04:04:08.676127232Z" level=info msg="StopPodSandbox for \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" returns successfully" Dec 13 04:04:08.676867 env[1142]: time="2024-12-13T04:04:08.676776061Z" level=info msg="RemovePodSandbox for \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\"" Dec 13 04:04:08.677199 env[1142]: time="2024-12-13T04:04:08.676843597Z" level=info msg="Forcibly stopping sandbox 
\"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\"" Dec 13 04:04:08.677199 env[1142]: time="2024-12-13T04:04:08.676983831Z" level=info msg="TearDown network for sandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" successfully" Dec 13 04:04:08.682838 env[1142]: time="2024-12-13T04:04:08.682651672Z" level=info msg="RemovePodSandbox \"e4687c55ee8138cd893e24c7133fb0d6a1791d227e82cb5f0ab9510dd2a0246b\" returns successfully"