Feb 12 20:43:19.961090 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:43:19.961109 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:43:19.961121 kernel: BIOS-provided physical RAM map:
Feb 12 20:43:19.961128 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 20:43:19.961134 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 20:43:19.961141 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 20:43:19.961149 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 12 20:43:19.961156 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 12 20:43:19.961164 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 20:43:19.961170 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 20:43:19.961177 kernel: NX (Execute Disable) protection: active
Feb 12 20:43:19.961183 kernel: SMBIOS 2.8 present.
Feb 12 20:43:19.961190 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 12 20:43:19.961196 kernel: Hypervisor detected: KVM
Feb 12 20:43:19.961205 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:43:19.961213 kernel: kvm-clock: cpu 0, msr 40faa001, primary cpu clock
Feb 12 20:43:19.961220 kernel: kvm-clock: using sched offset of 6035007432 cycles
Feb 12 20:43:19.961228 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:43:19.961235 kernel: tsc: Detected 1996.249 MHz processor
Feb 12 20:43:19.961243 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:43:19.961251 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:43:19.961258 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 12 20:43:19.961266 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 20:43:19.961275 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:43:19.961282 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 12 20:43:19.961289 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:43:19.961297 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:43:19.961304 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:43:19.961311 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 12 20:43:19.961318 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:43:19.961326 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:43:19.961333 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 12 20:43:19.961343 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 12 20:43:19.961350 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 12 20:43:19.961357 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 12 20:43:19.961364 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 12 20:43:19.961372 kernel: No NUMA configuration found
Feb 12 20:43:19.961379 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 12 20:43:19.961386 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 12 20:43:19.961394 kernel: Zone ranges:
Feb 12 20:43:19.961405 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:43:19.961413 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 12 20:43:19.961420 kernel: Normal empty
Feb 12 20:43:19.961428 kernel: Movable zone start for each node
Feb 12 20:43:19.961435 kernel: Early memory node ranges
Feb 12 20:43:19.961443 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 20:43:19.961452 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 12 20:43:19.961459 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 12 20:43:19.961467 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:43:19.961474 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 20:43:19.961482 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 12 20:43:19.961489 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 20:43:19.961497 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:43:19.961504 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:43:19.961512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 20:43:19.961521 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:43:19.961529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:43:19.961536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:43:19.961554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:43:19.961562 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:43:19.961570 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 20:43:19.961577 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 12 20:43:19.961585 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:43:19.961593 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:43:19.961600 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 20:43:19.961610 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 20:43:19.961618 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 20:43:19.961625 kernel: pcpu-alloc: [0] 0 1
Feb 12 20:43:19.961632 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 12 20:43:19.961640 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 12 20:43:19.961650 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 12 20:43:19.961658 kernel: Policy zone: DMA32
Feb 12 20:43:19.961667 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:43:19.961678 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:43:19.961686 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:43:19.961694 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 20:43:19.961702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:43:19.963757 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 12 20:43:19.963774 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:43:19.963783 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:43:19.963792 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:43:19.963805 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:43:19.963815 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:43:19.963824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:43:19.963834 kernel: Rude variant of Tasks RCU enabled.
Feb 12 20:43:19.963847 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:43:19.963861 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:43:19.963875 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:43:19.963887 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 20:43:19.963896 kernel: Console: colour VGA+ 80x25
Feb 12 20:43:19.963908 kernel: printk: console [tty0] enabled
Feb 12 20:43:19.963917 kernel: printk: console [ttyS0] enabled
Feb 12 20:43:19.963927 kernel: ACPI: Core revision 20210730
Feb 12 20:43:19.963936 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:43:19.963945 kernel: x2apic enabled
Feb 12 20:43:19.963954 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:43:19.963963 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 20:43:19.963972 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 20:43:19.963981 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 12 20:43:19.963991 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 12 20:43:19.964002 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 12 20:43:19.964011 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:43:19.964021 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 20:43:19.964033 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:43:19.964322 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:43:19.964333 kernel: Speculative Store Bypass: Vulnerable
Feb 12 20:43:19.964342 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 12 20:43:19.964352 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:43:19.964361 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:43:19.964379 kernel: LSM: Security Framework initializing
Feb 12 20:43:19.964387 kernel: SELinux: Initializing.
Feb 12 20:43:19.964397 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 20:43:19.964406 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 20:43:19.964415 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 12 20:43:19.964424 kernel: Performance Events: AMD PMU driver.
Feb 12 20:43:19.964433 kernel: ... version:                0
Feb 12 20:43:19.964469 kernel: ... bit width:              48
Feb 12 20:43:19.964480 kernel: ... generic registers:      4
Feb 12 20:43:19.964502 kernel: ... value mask:             0000ffffffffffff
Feb 12 20:43:19.964512 kernel: ... max period:             00007fffffffffff
Feb 12 20:43:19.964523 kernel: ... fixed-purpose events:   0
Feb 12 20:43:19.964533 kernel: ... event mask:             000000000000000f
Feb 12 20:43:19.964543 kernel: signal: max sigframe size: 1440
Feb 12 20:43:19.964552 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:43:19.964561 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:43:19.964571 kernel: x86: Booting SMP configuration:
Feb 12 20:43:19.964583 kernel: .... node #0, CPUs: #1
Feb 12 20:43:19.964593 kernel: kvm-clock: cpu 1, msr 40faa041, secondary cpu clock
Feb 12 20:43:19.964602 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 12 20:43:19.964611 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:43:19.964621 kernel: smpboot: Max logical packages: 2
Feb 12 20:43:19.964631 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 12 20:43:19.964640 kernel: devtmpfs: initialized
Feb 12 20:43:19.964650 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:43:19.964668 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:43:19.964694 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:43:19.964706 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:43:19.967776 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:43:19.967786 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:43:19.967795 kernel: audit: type=2000 audit(1707770599.139:1): state=initialized audit_enabled=0 res=1
Feb 12 20:43:19.967804 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:43:19.967813 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:43:19.967822 kernel: cpuidle: using governor menu
Feb 12 20:43:19.967831 kernel: ACPI: bus type PCI registered
Feb 12 20:43:19.967844 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:43:19.967853 kernel: dca service started, version 1.12.1
Feb 12 20:43:19.967861 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:43:19.967871 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:43:19.967880 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:43:19.967888 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:43:19.967897 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:43:19.967906 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:43:19.967915 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:43:19.967925 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:43:19.967934 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:43:19.967943 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:43:19.967952 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:43:19.967961 kernel: ACPI: Interpreter enabled
Feb 12 20:43:19.967970 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:43:19.967979 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:43:19.967988 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:43:19.967997 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 20:43:19.968009 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:43:19.968192 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:43:19.968287 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 20:43:19.968300 kernel: acpiphp: Slot [3] registered
Feb 12 20:43:19.968310 kernel: acpiphp: Slot [4] registered
Feb 12 20:43:19.968319 kernel: acpiphp: Slot [5] registered
Feb 12 20:43:19.968327 kernel: acpiphp: Slot [6] registered
Feb 12 20:43:19.968339 kernel: acpiphp: Slot [7] registered
Feb 12 20:43:19.968348 kernel: acpiphp: Slot [8] registered
Feb 12 20:43:19.968357 kernel: acpiphp: Slot [9] registered
Feb 12 20:43:19.968365 kernel: acpiphp: Slot [10] registered
Feb 12 20:43:19.968374 kernel: acpiphp: Slot [11] registered
Feb 12 20:43:19.968383 kernel: acpiphp: Slot [12] registered
Feb 12 20:43:19.968392 kernel: acpiphp: Slot [13] registered
Feb 12 20:43:19.968401 kernel: acpiphp: Slot [14] registered
Feb 12 20:43:19.968410 kernel: acpiphp: Slot [15] registered
Feb 12 20:43:19.968418 kernel: acpiphp: Slot [16] registered
Feb 12 20:43:19.968429 kernel: acpiphp: Slot [17] registered
Feb 12 20:43:19.968438 kernel: acpiphp: Slot [18] registered
Feb 12 20:43:19.968446 kernel: acpiphp: Slot [19] registered
Feb 12 20:43:19.968455 kernel: acpiphp: Slot [20] registered
Feb 12 20:43:19.968464 kernel: acpiphp: Slot [21] registered
Feb 12 20:43:19.968473 kernel: acpiphp: Slot [22] registered
Feb 12 20:43:19.968482 kernel: acpiphp: Slot [23] registered
Feb 12 20:43:19.968491 kernel: acpiphp: Slot [24] registered
Feb 12 20:43:19.968500 kernel: acpiphp: Slot [25] registered
Feb 12 20:43:19.968510 kernel: acpiphp: Slot [26] registered
Feb 12 20:43:19.968519 kernel: acpiphp: Slot [27] registered
Feb 12 20:43:19.968528 kernel: acpiphp: Slot [28] registered
Feb 12 20:43:19.968537 kernel: acpiphp: Slot [29] registered
Feb 12 20:43:19.968546 kernel: acpiphp: Slot [30] registered
Feb 12 20:43:19.968554 kernel: acpiphp: Slot [31] registered
Feb 12 20:43:19.968563 kernel: PCI host bridge to bus 0000:00
Feb 12 20:43:19.968672 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 20:43:19.968817 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 20:43:19.968911 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:43:19.968991 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 20:43:19.969071 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 20:43:19.969182 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:43:19.969292 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:43:19.969393 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 20:43:19.969504 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 20:43:19.969608 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 12 20:43:19.969693 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 20:43:19.969795 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 20:43:19.969879 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 20:43:19.969969 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 20:43:19.970063 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:43:19.970150 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 20:43:19.970232 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 20:43:19.970326 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 12 20:43:19.970416 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 12 20:43:19.970504 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 12 20:43:19.970592 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 12 20:43:19.970684 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 12 20:43:19.973810 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 20:43:19.973906 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:43:19.973991 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 12 20:43:19.974075 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 12 20:43:19.974158 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 12 20:43:19.974241 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 12 20:43:19.974335 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 20:43:19.974418 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 20:43:19.974500 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 12 20:43:19.974580 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 12 20:43:19.974674 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 12 20:43:19.975024 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 12 20:43:19.975110 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 12 20:43:19.975203 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:43:19.975284 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 12 20:43:19.975364 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 12 20:43:19.975376 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:43:19.975384 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:43:19.975392 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:43:19.975400 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:43:19.975408 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:43:19.975419 kernel: iommu: Default domain type: Translated
Feb 12 20:43:19.975427 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 20:43:19.975506 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 20:43:19.975587 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 20:43:19.975666 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 20:43:19.975678 kernel: vgaarb: loaded
Feb 12 20:43:19.975686 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:43:19.975695 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:43:19.975703 kernel: PTP clock support registered
Feb 12 20:43:19.975729 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:43:19.975737 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:43:19.975745 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 20:43:19.975753 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 12 20:43:19.975761 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:43:19.975769 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:43:19.975777 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:43:19.975785 kernel: pnp: PnP ACPI init
Feb 12 20:43:19.975871 kernel: pnp 00:03: [dma 2]
Feb 12 20:43:19.975887 kernel: pnp: PnP ACPI: found 5 devices
Feb 12 20:43:19.975895 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:43:19.975903 kernel: NET: Registered PF_INET protocol family
Feb 12 20:43:19.975912 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:43:19.975920 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 20:43:19.975928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:43:19.975936 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 20:43:19.975944 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 20:43:19.975954 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 20:43:19.975962 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 20:43:19.975970 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 20:43:19.975978 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:43:19.975986 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:43:19.976071 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 20:43:19.976153 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 20:43:19.976225 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:43:19.976295 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 20:43:19.976369 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 20:43:19.976449 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 20:43:19.976531 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:43:19.976611 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 20:43:19.976623 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:43:19.976631 kernel: Initialise system trusted keyrings
Feb 12 20:43:19.976639 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 20:43:19.976650 kernel: Key type asymmetric registered
Feb 12 20:43:19.976658 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:43:19.976666 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:43:19.976674 kernel: io scheduler mq-deadline registered
Feb 12 20:43:19.976682 kernel: io scheduler kyber registered
Feb 12 20:43:19.976690 kernel: io scheduler bfq registered
Feb 12 20:43:19.976698 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:43:19.977737 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 12 20:43:19.977754 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:43:19.977763 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 20:43:19.977775 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:43:19.977784 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:43:19.977794 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:43:19.977803 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:43:19.977812 kernel: random: crng init done
Feb 12 20:43:19.977820 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:43:19.977830 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:43:19.977839 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:43:19.977945 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 12 20:43:19.978032 kernel: rtc_cmos 00:04: registered as rtc0
Feb 12 20:43:19.978109 kernel: rtc_cmos 00:04: setting system clock to 2024-02-12T20:43:19 UTC (1707770599)
Feb 12 20:43:19.978186 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 12 20:43:19.978198 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:43:19.978207 kernel: Segment Routing with IPv6
Feb 12 20:43:19.978216 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:43:19.978226 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:43:19.978235 kernel: Key type dns_resolver registered
Feb 12 20:43:19.978246 kernel: IPI shorthand broadcast: enabled
Feb 12 20:43:19.978255 kernel: sched_clock: Marking stable (707632944, 117517512)->(856323445, -31172989)
Feb 12 20:43:19.978264 kernel: registered taskstats version 1
Feb 12 20:43:19.978273 kernel: Loading compiled-in X.509 certificates
Feb 12 20:43:19.978282 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:43:19.978291 kernel: Key type .fscrypt registered
Feb 12 20:43:19.978300 kernel: Key type fscrypt-provisioning registered
Feb 12 20:43:19.978309 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:43:19.978320 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:43:19.978328 kernel: ima: No architecture policies found
Feb 12 20:43:19.978337 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:43:19.978346 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:43:19.978355 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:43:19.978364 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:43:19.978373 kernel: Run /init as init process
Feb 12 20:43:19.978382 kernel: with arguments:
Feb 12 20:43:19.978390 kernel: /init
Feb 12 20:43:19.978401 kernel: with environment:
Feb 12 20:43:19.978409 kernel: HOME=/
Feb 12 20:43:19.978418 kernel: TERM=linux
Feb 12 20:43:19.978427 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:43:19.978438 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:43:19.978450 systemd[1]: Detected virtualization kvm.
Feb 12 20:43:19.978460 systemd[1]: Detected architecture x86-64.
Feb 12 20:43:19.978469 systemd[1]: Running in initrd.
Feb 12 20:43:19.978481 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:43:19.978490 systemd[1]: Hostname set to .
Feb 12 20:43:19.978500 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:43:19.978509 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:43:19.978519 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:43:19.978528 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:43:19.978538 systemd[1]: Reached target paths.target.
Feb 12 20:43:19.978548 systemd[1]: Reached target slices.target.
Feb 12 20:43:19.978559 systemd[1]: Reached target swap.target.
Feb 12 20:43:19.978568 systemd[1]: Reached target timers.target.
Feb 12 20:43:19.978578 systemd[1]: Listening on iscsid.socket.
Feb 12 20:43:19.978587 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:43:19.978597 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:43:19.978607 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:43:19.978616 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:43:19.978627 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:43:19.978637 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:43:19.978646 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:43:19.978656 systemd[1]: Reached target sockets.target.
Feb 12 20:43:19.978666 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:43:19.978682 systemd[1]: Finished network-cleanup.service.
Feb 12 20:43:19.978694 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:43:19.978706 systemd[1]: Starting systemd-journald.service...
Feb 12 20:43:19.979766 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:43:19.979775 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:43:19.979784 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:43:19.979793 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:43:19.979802 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:43:19.979813 systemd-journald[186]: Journal started
Feb 12 20:43:19.979868 systemd-journald[186]: Runtime Journal (/run/log/journal/57c1ddd9f9e24799bf61e4aa041365fc) is 4.9M, max 39.5M, 34.5M free.
Feb 12 20:43:19.964503 systemd-modules-load[187]: Inserted module 'overlay'
Feb 12 20:43:20.023993 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:43:20.024013 kernel: Bridge firewalling registered
Feb 12 20:43:20.024029 systemd[1]: Started systemd-journald.service.
Feb 12 20:43:20.024042 kernel: audit: type=1130 audit(1707770600.016:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.024053 kernel: SCSI subsystem initialized
Feb 12 20:43:20.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.001162 systemd-resolved[188]: Positive Trust Anchors:
Feb 12 20:43:20.001179 systemd-resolved[188]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:43:20.001217 systemd-resolved[188]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:43:20.005000 systemd-modules-load[187]: Inserted module 'br_netfilter'
Feb 12 20:43:20.008382 systemd-resolved[188]: Defaulting to hostname 'linux'.
Feb 12 20:43:20.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.029109 systemd[1]: Started systemd-resolved.service.
Feb 12 20:43:20.040095 kernel: audit: type=1130 audit(1707770600.028:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.040123 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:43:20.040135 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:43:20.040147 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:43:20.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.044075 systemd-modules-load[187]: Inserted module 'dm_multipath'
Feb 12 20:43:20.044729 kernel: audit: type=1130 audit(1707770600.039:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.045003 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:43:20.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.046178 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:43:20.050130 kernel: audit: type=1130 audit(1707770600.045:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.050805 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:43:20.055202 kernel: audit: type=1130 audit(1707770600.049:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.055471 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:43:20.056647 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:43:20.057782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:43:20.071077 kernel: audit: type=1130 audit(1707770600.069:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.069816 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:43:20.070516 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:43:20.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.079790 kernel: audit: type=1130 audit(1707770600.074:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.079844 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:43:20.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:20.081144 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:43:20.085451 kernel: audit: type=1130 audit(1707770600.079:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 12 20:43:20.091364 dracut-cmdline[209]: dracut-dracut-053 Feb 12 20:43:20.093330 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:43:20.160743 kernel: Loading iSCSI transport class v2.0-870. Feb 12 20:43:20.175738 kernel: iscsi: registered transport (tcp) Feb 12 20:43:20.199751 kernel: iscsi: registered transport (qla4xxx) Feb 12 20:43:20.199816 kernel: QLogic iSCSI HBA Driver Feb 12 20:43:20.255358 systemd[1]: Finished dracut-cmdline.service. Feb 12 20:43:20.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:20.258785 systemd[1]: Starting dracut-pre-udev.service... Feb 12 20:43:20.261822 kernel: audit: type=1130 audit(1707770600.255:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:20.343839 kernel: raid6: sse2x4 gen() 11358 MB/s Feb 12 20:43:20.360995 kernel: raid6: sse2x4 xor() 4985 MB/s Feb 12 20:43:20.377802 kernel: raid6: sse2x2 gen() 14276 MB/s Feb 12 20:43:20.394782 kernel: raid6: sse2x2 xor() 8842 MB/s Feb 12 20:43:20.411800 kernel: raid6: sse2x1 gen() 11116 MB/s Feb 12 20:43:20.429568 kernel: raid6: sse2x1 xor() 6925 MB/s Feb 12 20:43:20.429638 kernel: raid6: using algorithm sse2x2 gen() 14276 MB/s Feb 12 20:43:20.429667 kernel: raid6: .... 
xor() 8842 MB/s, rmw enabled Feb 12 20:43:20.430442 kernel: raid6: using ssse3x2 recovery algorithm Feb 12 20:43:20.446159 kernel: xor: measuring software checksum speed Feb 12 20:43:20.446218 kernel: prefetch64-sse : 18464 MB/sec Feb 12 20:43:20.446755 kernel: generic_sse : 15697 MB/sec Feb 12 20:43:20.448605 kernel: xor: using function: prefetch64-sse (18464 MB/sec) Feb 12 20:43:20.561079 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 20:43:20.577537 systemd[1]: Finished dracut-pre-udev.service. Feb 12 20:43:20.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:20.579000 audit: BPF prog-id=7 op=LOAD Feb 12 20:43:20.579000 audit: BPF prog-id=8 op=LOAD Feb 12 20:43:20.581116 systemd[1]: Starting systemd-udevd.service... Feb 12 20:43:20.595194 systemd-udevd[386]: Using default interface naming scheme 'v252'. Feb 12 20:43:20.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:20.607904 systemd[1]: Started systemd-udevd.service. Feb 12 20:43:20.613034 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 20:43:20.629968 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Feb 12 20:43:20.680995 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:43:20.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:20.684181 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:43:20.723808 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 12 20:43:20.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:20.795747 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 12 20:43:20.815747 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 20:43:20.815804 kernel: GPT:17805311 != 41943039 Feb 12 20:43:20.815817 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:43:20.815828 kernel: GPT:17805311 != 41943039 Feb 12 20:43:20.815839 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:43:20.815850 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:43:20.839741 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (444) Feb 12 20:43:20.847742 kernel: libata version 3.00 loaded. Feb 12 20:43:20.850017 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:43:20.893758 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 20:43:20.893989 kernel: scsi host0: ata_piix Feb 12 20:43:20.894125 kernel: scsi host1: ata_piix Feb 12 20:43:20.894234 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 12 20:43:20.894247 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 12 20:43:20.897865 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 20:43:20.901032 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 20:43:20.901563 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:43:20.906300 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:43:20.908747 systemd[1]: Starting disk-uuid.service... Feb 12 20:43:20.920209 disk-uuid[462]: Primary Header is updated. Feb 12 20:43:20.920209 disk-uuid[462]: Secondary Entries is updated. 
Feb 12 20:43:20.920209 disk-uuid[462]: Secondary Header is updated. Feb 12 20:43:20.929745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:43:20.935738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:43:21.945773 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:43:21.947468 disk-uuid[463]: The operation has completed successfully. Feb 12 20:43:22.016183 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:43:22.018146 systemd[1]: Finished disk-uuid.service. Feb 12 20:43:22.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.040360 systemd[1]: Starting verity-setup.service... Feb 12 20:43:22.077918 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 12 20:43:22.179233 systemd[1]: Found device dev-mapper-usr.device. Feb 12 20:43:22.182444 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:43:22.184118 systemd[1]: Finished verity-setup.service. Feb 12 20:43:22.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.344801 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 20:43:22.345123 systemd[1]: Mounted sysusr-usr.mount. Feb 12 20:43:22.345741 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:43:22.346503 systemd[1]: Starting ignition-setup.service... Feb 12 20:43:22.349101 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 12 20:43:22.374072 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:43:22.374133 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:43:22.374145 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:43:22.387933 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:43:22.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.400181 systemd[1]: Finished ignition-setup.service. Feb 12 20:43:22.401562 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:43:22.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.445354 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:43:22.446000 audit: BPF prog-id=9 op=LOAD Feb 12 20:43:22.447366 systemd[1]: Starting systemd-networkd.service... Feb 12 20:43:22.476776 systemd-networkd[633]: lo: Link UP Feb 12 20:43:22.476788 systemd-networkd[633]: lo: Gained carrier Feb 12 20:43:22.477276 systemd-networkd[633]: Enumeration completed Feb 12 20:43:22.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.477489 systemd-networkd[633]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:43:22.480227 systemd-networkd[633]: eth0: Link UP Feb 12 20:43:22.480236 systemd-networkd[633]: eth0: Gained carrier Feb 12 20:43:22.481261 systemd[1]: Started systemd-networkd.service. Feb 12 20:43:22.484256 systemd[1]: Reached target network.target. Feb 12 20:43:22.488355 systemd[1]: Starting iscsiuio.service... 
Feb 12 20:43:22.512832 systemd-networkd[633]: eth0: DHCPv4 address 172.24.4.230/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 12 20:43:22.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.520851 systemd[1]: Started iscsiuio.service. Feb 12 20:43:22.522185 systemd[1]: Starting iscsid.service... Feb 12 20:43:22.534755 iscsid[638]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:43:22.535750 iscsid[638]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 20:43:22.535750 iscsid[638]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 20:43:22.538368 iscsid[638]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:43:22.539161 iscsid[638]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:43:22.540109 iscsid[638]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:43:22.542085 systemd[1]: Started iscsid.service. Feb 12 20:43:22.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.543489 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:43:22.555585 systemd[1]: Finished dracut-initqueue.service. 
Feb 12 20:43:22.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.556166 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:43:22.556991 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:43:22.557984 systemd[1]: Reached target remote-fs.target. Feb 12 20:43:22.560244 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:43:22.574900 systemd[1]: Finished dracut-pre-mount.service. Feb 12 20:43:22.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.706160 ignition[577]: Ignition 2.14.0 Feb 12 20:43:22.706191 ignition[577]: Stage: fetch-offline Feb 12 20:43:22.706358 ignition[577]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:43:22.706405 ignition[577]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:43:22.708854 ignition[577]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:43:22.709147 ignition[577]: parsed url from cmdline: "" Feb 12 20:43:22.709158 ignition[577]: no config URL provided Feb 12 20:43:22.709172 ignition[577]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:43:22.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:22.711083 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:43:22.709192 ignition[577]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:43:22.712395 systemd[1]: Starting ignition-fetch.service... 
Feb 12 20:43:22.709212 ignition[577]: failed to fetch config: resource requires networking Feb 12 20:43:22.709990 ignition[577]: Ignition finished successfully Feb 12 20:43:22.758472 ignition[656]: Ignition 2.14.0 Feb 12 20:43:22.758500 ignition[656]: Stage: fetch Feb 12 20:43:22.758775 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:43:22.758821 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:43:22.761021 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:43:22.761257 ignition[656]: parsed url from cmdline: "" Feb 12 20:43:22.761266 ignition[656]: no config URL provided Feb 12 20:43:22.761280 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:43:22.761298 ignition[656]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:43:22.766887 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 12 20:43:22.766916 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 12 20:43:22.767515 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 12 20:43:23.110701 ignition[656]: GET result: OK Feb 12 20:43:23.111099 ignition[656]: parsing config with SHA512: 7149a3e15f08ea315b1163e3cffce30b012ab5c532ac9d8fc9b1d426f66ba71980331325210655d9877f4e0f63abe9413b54339e1a6c7537fccc69b53df6a551 Feb 12 20:43:23.189752 unknown[656]: fetched base config from "system" Feb 12 20:43:23.191943 unknown[656]: fetched base config from "system" Feb 12 20:43:23.191975 unknown[656]: fetched user config from "openstack" Feb 12 20:43:23.193872 ignition[656]: fetch: fetch complete Feb 12 20:43:23.193886 ignition[656]: fetch: fetch passed Feb 12 20:43:23.194037 ignition[656]: Ignition finished successfully Feb 12 20:43:23.199449 systemd[1]: Finished ignition-fetch.service. 
Feb 12 20:43:23.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:23.202586 systemd[1]: Starting ignition-kargs.service... Feb 12 20:43:23.223414 ignition[662]: Ignition 2.14.0 Feb 12 20:43:23.223442 ignition[662]: Stage: kargs Feb 12 20:43:23.223670 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:43:23.223754 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:43:23.226010 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:43:23.228775 ignition[662]: kargs: kargs passed Feb 12 20:43:23.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:23.238258 systemd[1]: Finished ignition-kargs.service. Feb 12 20:43:23.228861 ignition[662]: Ignition finished successfully Feb 12 20:43:23.240762 systemd[1]: Starting ignition-disks.service... Feb 12 20:43:23.256458 ignition[667]: Ignition 2.14.0 Feb 12 20:43:23.256486 ignition[667]: Stage: disks Feb 12 20:43:23.256776 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:43:23.256823 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:43:23.260254 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:43:23.263675 ignition[667]: disks: disks passed Feb 12 20:43:23.264608 ignition[667]: Ignition finished successfully Feb 12 20:43:23.266020 systemd[1]: Finished ignition-disks.service. 
Feb 12 20:43:23.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:23.266517 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:43:23.266957 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:43:23.267355 systemd[1]: Reached target local-fs.target. Feb 12 20:43:23.269004 systemd[1]: Reached target sysinit.target. Feb 12 20:43:23.270616 systemd[1]: Reached target basic.target. Feb 12 20:43:23.272669 systemd[1]: Starting systemd-fsck-root.service... Feb 12 20:43:23.293137 systemd-fsck[675]: ROOT: clean, 602/1628000 files, 124050/1617920 blocks Feb 12 20:43:23.302050 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:43:23.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:23.303414 systemd[1]: Mounting sysroot.mount... Feb 12 20:43:23.320742 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 20:43:23.321237 systemd[1]: Mounted sysroot.mount. Feb 12 20:43:23.322931 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:43:23.327140 systemd[1]: Mounting sysroot-usr.mount... Feb 12 20:43:23.329126 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:43:23.331858 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 12 20:43:23.333165 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:43:23.333234 systemd[1]: Reached target ignition-diskful.target. Feb 12 20:43:23.340887 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:43:23.352292 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 12 20:43:23.356958 systemd[1]: Starting initrd-setup-root.service... Feb 12 20:43:23.373752 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Feb 12 20:43:23.376240 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:43:23.386134 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:43:23.386182 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:43:23.386203 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:43:23.398782 initrd-setup-root[713]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:43:23.403516 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:43:23.408545 initrd-setup-root[721]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:43:23.414053 initrd-setup-root[729]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:43:23.505896 systemd[1]: Finished initrd-setup-root.service. Feb 12 20:43:23.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:23.508406 systemd[1]: Starting ignition-mount.service... Feb 12 20:43:23.510481 systemd[1]: Starting sysroot-boot.service... Feb 12 20:43:23.527063 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 20:43:23.527194 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 12 20:43:23.557180 ignition[750]: INFO : Ignition 2.14.0 Feb 12 20:43:23.558060 ignition[750]: INFO : Stage: mount Feb 12 20:43:23.558634 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:43:23.559361 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:43:23.561318 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:43:23.566104 ignition[750]: INFO : mount: mount passed Feb 12 20:43:23.566916 coreos-metadata[681]: Feb 12 20:43:23.566 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 12 20:43:23.568052 ignition[750]: INFO : Ignition finished successfully Feb 12 20:43:23.569626 systemd[1]: Finished ignition-mount.service. Feb 12 20:43:23.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:23.574701 systemd[1]: Finished sysroot-boot.service. Feb 12 20:43:23.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:23.590978 coreos-metadata[681]: Feb 12 20:43:23.590 INFO Fetch successful Feb 12 20:43:23.591656 coreos-metadata[681]: Feb 12 20:43:23.591 INFO wrote hostname ci-3510-3-2-8-90b6ad721e.novalocal to /sysroot/etc/hostname Feb 12 20:43:23.595383 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 12 20:43:23.595492 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 12 20:43:23.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:43:23.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:43:23.597580 systemd[1]: Starting ignition-files.service... Feb 12 20:43:23.605148 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:43:23.615773 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (759) Feb 12 20:43:23.619462 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:43:23.619486 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:43:23.619497 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:43:23.627473 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:43:23.638291 ignition[778]: INFO : Ignition 2.14.0 Feb 12 20:43:23.638291 ignition[778]: INFO : Stage: files Feb 12 20:43:23.639373 ignition[778]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:43:23.639373 ignition[778]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:43:23.639373 ignition[778]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:43:23.641910 ignition[778]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:43:23.642732 ignition[778]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:43:23.642732 ignition[778]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:43:23.645638 ignition[778]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:43:23.646389 ignition[778]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:43:23.647326 unknown[778]: wrote ssh authorized keys 
file for user: core
Feb 12 20:43:23.648049 ignition[778]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 20:43:23.648782 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 20:43:23.648782 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 20:43:23.714952 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 20:43:23.870201 systemd-networkd[633]: eth0: Gained IPv6LL
Feb 12 20:43:24.004823 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 20:43:24.004823 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 20:43:24.004823 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 20:43:24.549900 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 20:43:25.016488 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 20:43:25.018360 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 20:43:25.019420 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 20:43:25.020447 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 20:43:25.265077 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 20:43:26.030689 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 20:43:26.030689 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 20:43:26.046186 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:43:26.046186 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:43:26.046186 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:43:26.046186 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 12 20:43:26.174053 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 20:43:27.156752 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 12 20:43:27.156752 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:43:27.162790 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:43:27.162790 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 20:43:27.265859 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 20:43:29.622953 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 20:43:29.624922 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:43:29.625865 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:43:29.626778 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 20:43:29.748128 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 20:43:30.685502 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 20:43:30.685502 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:43:30.685502 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 20:43:30.685502 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 20:43:31.272631 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 12 20:43:31.750797 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 20:43:31.750797 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: op(10): op(11): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: op(12): [started] processing unit "coreos-metadata.service"
Feb 12 20:43:31.755614 ignition[778]: INFO : files: op(12): op(13): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 12 20:43:31.795686 kernel: kauditd_printk_skb: 27 callbacks suppressed
Feb 12 20:43:31.795725 kernel: audit: type=1130 audit(1707770611.764:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.795745 kernel: audit: type=1130 audit(1707770611.784:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.762350 systemd[1]: Finished ignition-files.service.
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(12): [finished] processing unit "coreos-metadata.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(18): [started] processing unit "prepare-helm.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(18): [finished] processing unit "prepare-helm.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(1b): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 20:43:31.796449 ignition[778]: INFO : files: op(1b): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 20:43:31.853118 kernel: audit: type=1130 audit(1707770611.808:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.853147 kernel: audit: type=1131 audit(1707770611.808:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.853159 kernel: audit: type=1130 audit(1707770611.826:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.853172 kernel: audit: type=1131 audit(1707770611.826:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.766271 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 20:43:31.853898 ignition[778]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 20:43:31.853898 ignition[778]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 20:43:31.853898 ignition[778]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 20:43:31.853898 ignition[778]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 20:43:31.853898 ignition[778]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 20:43:31.853898 ignition[778]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 20:43:31.853898 ignition[778]: INFO : files: files passed
Feb 12 20:43:31.853898 ignition[778]: INFO : Ignition finished successfully
Feb 12 20:43:31.868325 kernel: audit: type=1130 audit(1707770611.857:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.778277 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 20:43:31.869340 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 20:43:31.780985 systemd[1]: Starting ignition-quench.service...
Feb 12 20:43:31.784354 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 20:43:31.784932 systemd[1]: Reached target ignition-complete.target.
Feb 12 20:43:31.800562 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 20:43:31.807327 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 20:43:31.807418 systemd[1]: Finished ignition-quench.service.
Feb 12 20:43:31.819308 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 20:43:31.819390 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 20:43:31.881992 kernel: audit: type=1130 audit(1707770611.874:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.882028 kernel: audit: type=1131 audit(1707770611.874:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.827308 systemd[1]: Reached target initrd-fs.target.
Feb 12 20:43:31.844458 systemd[1]: Reached target initrd.target.
Feb 12 20:43:31.845800 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 20:43:31.846546 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 20:43:31.888962 kernel: audit: type=1131 audit(1707770611.884:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.857319 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 20:43:31.862377 systemd[1]: Starting initrd-cleanup.service...
Feb 12 20:43:31.874305 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 20:43:31.874398 systemd[1]: Finished initrd-cleanup.service.
Feb 12 20:43:31.875694 systemd[1]: Stopped target nss-lookup.target.
Feb 12 20:43:31.882377 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 20:43:31.883285 systemd[1]: Stopped target timers.target.
Feb 12 20:43:31.884154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 20:43:31.884199 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 20:43:31.885250 systemd[1]: Stopped target initrd.target.
Feb 12 20:43:31.889356 systemd[1]: Stopped target basic.target.
Feb 12 20:43:31.890281 systemd[1]: Stopped target ignition-complete.target.
Feb 12 20:43:31.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.891184 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 20:43:31.892069 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 20:43:31.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.892918 systemd[1]: Stopped target remote-fs.target.
Feb 12 20:43:31.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.893788 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 20:43:31.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.894697 systemd[1]: Stopped target sysinit.target.
Feb 12 20:43:31.895514 systemd[1]: Stopped target local-fs.target.
Feb 12 20:43:31.896378 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 20:43:31.904747 iscsid[638]: iscsid shutting down.
Feb 12 20:43:31.897201 systemd[1]: Stopped target swap.target.
Feb 12 20:43:31.898016 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 20:43:31.898061 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 20:43:31.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.898900 systemd[1]: Stopped target cryptsetup.target.
Feb 12 20:43:31.899691 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 20:43:31.899752 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 20:43:31.922000 ignition[816]: INFO : Ignition 2.14.0
Feb 12 20:43:31.922000 ignition[816]: INFO : Stage: umount
Feb 12 20:43:31.922000 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:43:31.922000 ignition[816]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 12 20:43:31.922000 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 12 20:43:31.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.900610 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 20:43:31.900649 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 20:43:31.901455 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 20:43:31.901491 systemd[1]: Stopped ignition-files.service.
Feb 12 20:43:31.903050 systemd[1]: Stopping ignition-mount.service...
Feb 12 20:43:31.906042 systemd[1]: Stopping iscsid.service...
Feb 12 20:43:31.930357 ignition[816]: INFO : umount: umount passed
Feb 12 20:43:31.930357 ignition[816]: INFO : Ignition finished successfully
Feb 12 20:43:31.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.907690 systemd[1]: Stopping sysroot-boot.service...
Feb 12 20:43:31.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.908149 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 20:43:31.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.908203 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 20:43:31.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.908688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 20:43:31.908747 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 20:43:31.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.909538 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 20:43:31.909631 systemd[1]: Stopped iscsid.service.
Feb 12 20:43:31.911358 systemd[1]: Stopping iscsiuio.service...
Feb 12 20:43:31.919394 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 20:43:31.919497 systemd[1]: Stopped iscsiuio.service.
Feb 12 20:43:31.930511 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 20:43:31.930610 systemd[1]: Stopped ignition-mount.service.
Feb 12 20:43:31.931398 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 20:43:31.931441 systemd[1]: Stopped ignition-disks.service.
Feb 12 20:43:31.932270 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 20:43:31.932307 systemd[1]: Stopped ignition-kargs.service.
Feb 12 20:43:31.933140 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 20:43:31.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.933177 systemd[1]: Stopped ignition-fetch.service.
Feb 12 20:43:31.934274 systemd[1]: Stopped target network.target.
Feb 12 20:43:31.935254 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 20:43:31.935300 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 20:43:31.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.936336 systemd[1]: Stopped target paths.target.
Feb 12 20:43:31.937194 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 20:43:31.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.939985 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 20:43:31.940663 systemd[1]: Stopped target slices.target.
Feb 12 20:43:31.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.957000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 20:43:31.941543 systemd[1]: Stopped target sockets.target.
Feb 12 20:43:31.942538 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 20:43:31.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.942583 systemd[1]: Closed iscsid.socket.
Feb 12 20:43:31.943369 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 20:43:31.943402 systemd[1]: Closed iscsiuio.socket.
Feb 12 20:43:31.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.944238 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 20:43:31.944283 systemd[1]: Stopped ignition-setup.service.
Feb 12 20:43:31.945330 systemd[1]: Stopping systemd-networkd.service...
Feb 12 20:43:31.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.946980 systemd[1]: Stopping systemd-resolved.service...
Feb 12 20:43:31.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.947754 systemd-networkd[633]: eth0: DHCPv6 lease lost
Feb 12 20:43:31.965000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 20:43:31.951182 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 20:43:31.951641 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 20:43:31.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.952241 systemd[1]: Stopped systemd-networkd.service.
Feb 12 20:43:31.954417 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 20:43:31.954536 systemd[1]: Stopped systemd-resolved.service.
Feb 12 20:43:31.956929 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 20:43:31.957027 systemd[1]: Stopped sysroot-boot.service.
Feb 12 20:43:31.958022 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 20:43:31.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.958069 systemd[1]: Closed systemd-networkd.socket.
Feb 12 20:43:31.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.958626 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 20:43:31.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.958665 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 20:43:31.960347 systemd[1]: Stopping network-cleanup.service...
Feb 12 20:43:31.961213 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 20:43:31.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.961352 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 20:43:31.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.962201 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:43:31.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.962251 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:43:31.964482 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 20:43:31.964532 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 20:43:31.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.965229 systemd[1]: Stopping systemd-udevd.service...
Feb 12 20:43:31.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:31.967434 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 20:43:31.967990 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 20:43:31.968156 systemd[1]: Stopped systemd-udevd.service.
Feb 12 20:43:31.969514 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 20:43:31.969562 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 20:43:31.971540 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 20:43:31.971570 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 20:43:31.972417 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 20:43:31.972484 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 20:43:31.973334 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 20:43:31.973371 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 20:43:31.974355 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 20:43:31.974394 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 20:43:31.979610 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 20:43:31.986267 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 12 20:43:31.986319 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 12 20:43:31.987634 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 20:43:31.987671 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 20:43:31.988308 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 20:43:31.988347 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 20:43:31.990031 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 12 20:43:31.990478 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 20:43:31.990556 systemd[1]: Stopped network-cleanup.service.
Feb 12 20:43:31.991555 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 20:43:31.991626 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 20:43:31.992393 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 20:43:31.993834 systemd[1]: Starting initrd-switch-root.service...
Feb 12 20:43:32.013014 systemd[1]: Switching root.
Feb 12 20:43:32.031934 systemd-journald[186]: Journal stopped
Feb 12 20:43:36.377545 systemd-journald[186]: Received SIGTERM from PID 1 (n/a).
Feb 12 20:43:36.377600 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 20:43:36.377614 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 20:43:36.377626 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 20:43:36.377640 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 20:43:36.377650 kernel: SELinux: policy capability open_perms=1
Feb 12 20:43:36.377661 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 20:43:36.377672 kernel: SELinux: policy capability always_check_network=0
Feb 12 20:43:36.377683 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 20:43:36.377697 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 20:43:36.381193 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 20:43:36.381222 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 20:43:36.381236 systemd[1]: Successfully loaded SELinux policy in 93.663ms.
Feb 12 20:43:36.381257 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.600ms.
Feb 12 20:43:36.381271 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:43:36.381283 systemd[1]: Detected virtualization kvm.
Feb 12 20:43:36.381295 systemd[1]: Detected architecture x86-64.
Feb 12 20:43:36.381306 systemd[1]: Detected first boot.
Feb 12 20:43:36.381321 systemd[1]: Hostname set to .
Feb 12 20:43:36.381333 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:43:36.381344 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 20:43:36.381356 systemd[1]: Populated /etc with preset unit settings.
Feb 12 20:43:36.381369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:43:36.381382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:43:36.381396 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:43:36.381410 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 20:43:36.381422 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 20:43:36.381434 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 20:43:36.381449 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 20:43:36.381462 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 20:43:36.381474 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 20:43:36.381486 systemd[1]: Created slice system-getty.slice.
Feb 12 20:43:36.381500 systemd[1]: Created slice system-modprobe.slice.
Feb 12 20:43:36.381540 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 20:43:36.381553 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 20:43:36.381565 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 20:43:36.381577 systemd[1]: Created slice user.slice.
Feb 12 20:43:36.381588 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:43:36.381599 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 20:43:36.381611 systemd[1]: Set up automount boot.automount.
Feb 12 20:43:36.381626 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 20:43:36.381638 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 20:43:36.381649 systemd[1]: Stopped target initrd-fs.target.
Feb 12 20:43:36.381660 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 20:43:36.381675 systemd[1]: Reached target integritysetup.target.
Feb 12 20:43:36.381686 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:43:36.381697 systemd[1]: Reached target remote-fs.target.
Feb 12 20:43:36.385525 systemd[1]: Reached target slices.target.
Feb 12 20:43:36.385554 systemd[1]: Reached target swap.target.
Feb 12 20:43:36.385567 systemd[1]: Reached target torcx.target.
Feb 12 20:43:36.385579 systemd[1]: Reached target veritysetup.target.
Feb 12 20:43:36.385591 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 20:43:36.385603 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 20:43:36.385615 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:43:36.385626 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:43:36.385638 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:43:36.385650 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 20:43:36.385666 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 20:43:36.385679 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 20:43:36.385690 systemd[1]: Mounting media.mount...
Feb 12 20:43:36.385702 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:43:36.388691 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 20:43:36.388739 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 20:43:36.388753 systemd[1]: Mounting tmp.mount...
Feb 12 20:43:36.388764 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 20:43:36.388776 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 20:43:36.388792 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:43:36.388804 systemd[1]: Starting modprobe@configfs.service...
Feb 12 20:43:36.388816 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 20:43:36.388829 systemd[1]: Starting modprobe@drm.service...
Feb 12 20:43:36.388840 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 20:43:36.388852 systemd[1]: Starting modprobe@fuse.service...
Feb 12 20:43:36.388863 systemd[1]: Starting modprobe@loop.service...
Feb 12 20:43:36.388879 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 20:43:36.388890 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 20:43:36.388904 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 20:43:36.388916 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 20:43:36.388927 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 20:43:36.388938 systemd[1]: Stopped systemd-journald.service.
Feb 12 20:43:36.388949 systemd[1]: Starting systemd-journald.service...
Feb 12 20:43:36.388960 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:43:36.388972 systemd[1]: Starting systemd-network-generator.service...
Feb 12 20:43:36.388984 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 20:43:36.388995 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:43:36.389008 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 20:43:36.389020 systemd[1]: Stopped verity-setup.service.
Feb 12 20:43:36.389031 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:43:36.389042 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 20:43:36.389054 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 20:43:36.389065 systemd[1]: Mounted media.mount.
Feb 12 20:43:36.389076 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 20:43:36.389087 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 20:43:36.389098 systemd[1]: Mounted tmp.mount.
Feb 12 20:43:36.389112 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:43:36.389123 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 20:43:36.389135 systemd[1]: Finished modprobe@configfs.service.
Feb 12 20:43:36.389147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 20:43:36.389158 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 20:43:36.389169 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 20:43:36.389182 systemd[1]: Finished modprobe@drm.service.
Feb 12 20:43:36.389193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 20:43:36.389205 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 20:43:36.389216 kernel: fuse: init (API version 7.34)
Feb 12 20:43:36.389229 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:43:36.389240 kernel: loop: module loaded
Feb 12 20:43:36.389251 systemd[1]: Finished systemd-network-generator.service.
Feb 12 20:43:36.389262 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 20:43:36.389275 systemd[1]: Finished modprobe@fuse.service.
Feb 12 20:43:36.389286 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 20:43:36.389298 systemd[1]: Finished modprobe@loop.service.
Feb 12 20:43:36.389309 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 20:43:36.389321 systemd[1]: Reached target network-pre.target.
Feb 12 20:43:36.389333 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 20:43:36.389344 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 20:43:36.389355 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 20:43:36.389372 systemd-journald[915]: Journal started
Feb 12 20:43:36.389432 systemd-journald[915]: Runtime Journal (/run/log/journal/57c1ddd9f9e24799bf61e4aa041365fc) is 4.9M, max 39.5M, 34.5M free.
Feb 12 20:43:32.311000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 20:43:32.428000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:43:32.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:43:32.428000 audit: BPF prog-id=10 op=LOAD
Feb 12 20:43:32.428000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 20:43:32.428000 audit: BPF prog-id=11 op=LOAD
Feb 12 20:43:32.428000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 20:43:32.607000 audit[849]: AVC avc: denied { associate } for pid=849 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 20:43:32.607000 audit[849]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d89c a1=c0000cede0 a2=c0000d7ac0 a3=32 items=0 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:43:32.607000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:43:32.610000 audit[849]: AVC avc: denied { associate } for pid=849 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 20:43:32.610000 audit[849]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d975 a2=1ed a3=0 items=2 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:43:32.610000 audit: CWD cwd="/"
Feb 12 20:43:32.610000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:32.610000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:32.610000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:43:36.137000 audit: BPF prog-id=12 op=LOAD
Feb 12 20:43:36.137000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 20:43:36.138000 audit: BPF prog-id=13 op=LOAD
Feb 12 20:43:36.138000 audit: BPF prog-id=14 op=LOAD
Feb 12 20:43:36.138000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 20:43:36.138000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 20:43:36.138000 audit: BPF prog-id=15 op=LOAD
Feb 12 20:43:36.138000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 20:43:36.139000 audit: BPF prog-id=16 op=LOAD
Feb 12 20:43:36.139000 audit: BPF prog-id=17 op=LOAD
Feb 12 20:43:36.139000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 20:43:36.139000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 20:43:36.140000 audit: BPF prog-id=18 op=LOAD
Feb 12 20:43:36.140000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 20:43:36.140000 audit: BPF prog-id=19 op=LOAD
Feb 12 20:43:36.140000 audit: BPF prog-id=20 op=LOAD
Feb 12 20:43:36.140000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 20:43:36.140000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 20:43:36.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.151000 audit: BPF prog-id=18 op=UNLOAD
Feb 12 20:43:36.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.280000 audit: BPF prog-id=21 op=LOAD
Feb 12 20:43:36.280000 audit: BPF prog-id=22 op=LOAD
Feb 12 20:43:36.280000 audit: BPF prog-id=23 op=LOAD
Feb 12 20:43:36.280000 audit: BPF prog-id=19 op=UNLOAD
Feb 12 20:43:36.280000 audit: BPF prog-id=20 op=UNLOAD
Feb 12 20:43:36.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.375000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 20:43:36.375000 audit[915]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd754a78e0 a2=4000 a3=7ffd754a797c items=0 ppid=1 pid=915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:43:36.375000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 20:43:36.135963 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 20:43:32.603393 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:43:36.135978 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 20:43:32.604582 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:43:36.142176 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 20:43:32.604607 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:43:32.604646 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 20:43:32.604659 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 20:43:32.604695 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 20:43:32.604730 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 20:43:32.604990 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 20:43:32.605036 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:43:32.605051 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:43:32.606213 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 20:43:32.606254 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 20:43:32.606276 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 20:43:32.606293 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 20:43:32.606313 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 20:43:36.394780 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 20:43:32.606329 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 20:43:35.669825 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:43:35.670111 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:43:35.670238 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:43:35.670436 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:43:35.670499 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 20:43:35.670576 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-02-12T20:43:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 20:43:36.400766 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 20:43:36.400820 systemd[1]: Starting systemd-random-seed.service...
Feb 12 20:43:36.400840 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 20:43:36.405728 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:43:36.410737 systemd[1]: Started systemd-journald.service.
Feb 12 20:43:36.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.412469 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 20:43:36.413071 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 20:43:36.421824 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 20:43:36.431772 systemd[1]: Finished systemd-random-seed.service.
Feb 12 20:43:36.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.432385 systemd[1]: Reached target first-boot-complete.target.
Feb 12 20:43:36.436982 systemd-journald[915]: Time spent on flushing to /var/log/journal/57c1ddd9f9e24799bf61e4aa041365fc is 29.449ms for 1147 entries.
Feb 12 20:43:36.436982 systemd-journald[915]: System Journal (/var/log/journal/57c1ddd9f9e24799bf61e4aa041365fc) is 8.0M, max 584.8M, 576.8M free.
Feb 12 20:43:36.497944 systemd-journald[915]: Received client request to flush runtime journal.
Feb 12 20:43:36.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.442533 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:43:36.498308 udevadm[958]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 20:43:36.451156 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 20:43:36.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.458743 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:43:36.466360 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 20:43:36.468523 systemd[1]: Starting systemd-sysusers.service...
Feb 12 20:43:36.499083 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 20:43:36.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.513676 systemd[1]: Finished systemd-sysusers.service.
Feb 12 20:43:36.515579 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:43:36.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:36.556035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:43:37.119895 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 20:43:37.132827 kernel: kauditd_printk_skb: 108 callbacks suppressed
Feb 12 20:43:37.133692 kernel: audit: type=1130 audit(1707770617.120:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:37.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:37.132000 audit: BPF prog-id=24 op=LOAD
Feb 12 20:43:37.136107 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:43:37.137766 kernel: audit: type=1334 audit(1707770617.132:148): prog-id=24 op=LOAD
Feb 12 20:43:37.132000 audit: BPF prog-id=25 op=LOAD
Feb 12 20:43:37.133000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 20:43:37.133000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 20:43:37.141013 kernel: audit: type=1334 audit(1707770617.132:149): prog-id=25 op=LOAD
Feb 12 20:43:37.141092 kernel: audit: type=1334 audit(1707770617.133:150): prog-id=7 op=UNLOAD
Feb 12 20:43:37.141155 kernel: audit: type=1334 audit(1707770617.133:151): prog-id=8 op=UNLOAD
Feb 12 20:43:37.181807 systemd-udevd[964]: Using default interface naming scheme 'v252'.
Feb 12 20:43:37.232289 systemd[1]: Started systemd-udevd.service.
Feb 12 20:43:37.253781 kernel: audit: type=1130 audit(1707770617.236:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:37.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:37.269102 kernel: audit: type=1334 audit(1707770617.256:153): prog-id=26 op=LOAD
Feb 12 20:43:37.256000 audit: BPF prog-id=26 op=LOAD
Feb 12 20:43:37.267959 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:43:37.288154 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 12 20:43:37.412818 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 12 20:43:37.419758 kernel: ACPI: button: Power Button [PWRF]
Feb 12 20:43:37.433000 audit[967]: AVC avc: denied { confidentiality } for pid=967 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 20:43:37.451789 kernel: audit: type=1400 audit(1707770617.433:154): avc: denied { confidentiality } for pid=967 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 20:43:37.433000 audit[967]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55691fbe84f0 a1=32194 a2=7f2af8e3abc5 a3=5 items=108 ppid=964 pid=967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:43:37.433000 audit: CWD cwd="/"
Feb 12 20:43:37.459742 kernel: audit: type=1300 audit(1707770617.433:154): arch=c000003e syscall=175 success=yes exit=0 a0=55691fbe84f0 a1=32194 a2=7f2af8e3abc5 a3=5 items=108 ppid=964 pid=967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:43:37.459807 kernel: audit: type=1307 audit(1707770617.433:154): cwd="/"
Feb 12 20:43:37.433000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=1 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=2 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=3 name=(null) inode=14168 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=4 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=5 name=(null) inode=14169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=6 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=7 name=(null) inode=14170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=8 name=(null) inode=14170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=9 name=(null) inode=14171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=10 name=(null) inode=14170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=11 name=(null) inode=14172 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=12 name=(null) inode=14170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=13 name=(null) inode=14173 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=14 name=(null) inode=14170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=15 name=(null) inode=14174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=16 name=(null) inode=14170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=17 name=(null) inode=14175 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=18 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=19 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=20 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=21 name=(null) inode=14177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=22 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=23 name=(null) inode=14178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=24 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=25 name=(null) inode=14179 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=26 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=27 name=(null) inode=14180 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=28 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=29 name=(null) inode=14181 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=30 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=31 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=32 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=33 name=(null) inode=14183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=34 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=35 name=(null) inode=14184 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=36 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=37 name=(null) inode=14185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=38 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=39 name=(null) inode=14186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=40 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=41 name=(null) inode=14187 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=42 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=43 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=44 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=45 name=(null) inode=14189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=46 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=47 name=(null) inode=14190 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=48 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=49 name=(null) inode=14191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=50 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=51 name=(null) inode=14192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=52 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=53 name=(null) inode=14193 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=55 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=56 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:43:37.433000 audit: PATH item=57 name=(null) inode=14195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=58 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=59 name=(null) inode=14196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=60 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=61 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=62 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=63 name=(null) inode=14198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=64 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=65 name=(null) inode=14199 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=66 
name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=67 name=(null) inode=14200 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=68 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=69 name=(null) inode=14201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=70 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=71 name=(null) inode=14202 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=72 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=73 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=74 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=75 name=(null) inode=14204 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=76 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=77 name=(null) inode=14205 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=78 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=79 name=(null) inode=14206 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=80 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=81 name=(null) inode=14207 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=82 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=83 name=(null) inode=14208 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=84 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=85 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=86 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=87 name=(null) inode=14210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=88 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=89 name=(null) inode=14211 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=90 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=91 name=(null) inode=14212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=92 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=93 name=(null) inode=14213 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=94 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=95 name=(null) inode=14214 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=96 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=97 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=98 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=99 name=(null) inode=14216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=100 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=101 name=(null) inode=14217 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:43:37.433000 audit: PATH item=102 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=103 name=(null) inode=14218 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=104 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=105 name=(null) inode=14219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=106 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PATH item=107 name=(null) inode=14220 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:43:37.433000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 12 20:43:37.642187 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 12 20:43:37.652000 audit: BPF prog-id=27 op=LOAD
Feb 12 20:43:37.653000 audit: BPF prog-id=28 op=LOAD
Feb 12 20:43:37.655000 audit: BPF prog-id=29 op=LOAD
Feb 12 20:43:37.658609 systemd[1]: Starting systemd-userdbd.service...
Feb 12 20:43:37.667798 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 12 20:43:37.689804 kernel: mousedev: PS/2 mouse device common for all mice
Feb 12 20:43:37.728853 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:43:37.807043 systemd[1]: Started systemd-userdbd.service.
Feb 12 20:43:37.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:37.898043 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 20:43:37.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:37.905128 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 20:43:37.958276 lvm[992]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 20:43:37.984590 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 20:43:37.985272 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:43:37.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:37.987136 systemd[1]: Starting lvm2-activation.service...
Feb 12 20:43:37.992885 lvm[994]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 20:43:38.006438 systemd-networkd[980]: lo: Link UP
Feb 12 20:43:38.007007 systemd-networkd[980]: lo: Gained carrier
Feb 12 20:43:38.008232 systemd-networkd[980]: Enumeration completed
Feb 12 20:43:38.008563 systemd[1]: Started systemd-networkd.service.
Feb 12 20:43:38.008892 systemd-networkd[980]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:43:38.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.012656 systemd-networkd[980]: eth0: Link UP
Feb 12 20:43:38.012898 systemd-networkd[980]: eth0: Gained carrier
Feb 12 20:43:38.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.013827 systemd[1]: Finished lvm2-activation.service.
Feb 12 20:43:38.014390 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:43:38.014850 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 20:43:38.014873 systemd[1]: Reached target local-fs.target.
Feb 12 20:43:38.015300 systemd[1]: Reached target machines.target.
Feb 12 20:43:38.017199 systemd[1]: Starting ldconfig.service...
Feb 12 20:43:38.018945 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 20:43:38.018999 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 20:43:38.020182 systemd[1]: Starting systemd-boot-update.service...
Feb 12 20:43:38.023044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 20:43:38.025667 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 20:43:38.026501 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 20:43:38.026557 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 20:43:38.028061 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 20:43:38.048558 systemd[1]: boot.automount: Got automount request for /boot, triggered by 996 (bootctl)
Feb 12 20:43:38.050043 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 20:43:38.050840 systemd-networkd[980]: eth0: DHCPv4 address 172.24.4.230/24, gateway 172.24.4.1 acquired from 172.24.4.1
Feb 12 20:43:38.084870 systemd-tmpfiles[999]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 20:43:38.117393 systemd-tmpfiles[999]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 20:43:38.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.122948 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 20:43:38.158836 systemd-tmpfiles[999]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 20:43:38.310878 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 20:43:38.312437 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 20:43:38.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.472456 systemd-fsck[1004]: fsck.fat 4.2 (2021-01-31)
Feb 12 20:43:38.472456 systemd-fsck[1004]: /dev/vda1: 789 files, 115339/258078 clusters
Feb 12 20:43:38.476918 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 20:43:38.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.481536 systemd[1]: Mounting boot.mount...
Feb 12 20:43:38.504682 systemd[1]: Mounted boot.mount.
Feb 12 20:43:38.535939 systemd[1]: Finished systemd-boot-update.service.
Feb 12 20:43:38.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.649248 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 20:43:38.651225 systemd[1]: Starting audit-rules.service...
Feb 12 20:43:38.652629 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 20:43:38.654613 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 20:43:38.665000 audit: BPF prog-id=30 op=LOAD
Feb 12 20:43:38.670298 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:43:38.671000 audit: BPF prog-id=31 op=LOAD
Feb 12 20:43:38.675693 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 20:43:38.677689 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 20:43:38.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.685603 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 20:43:38.686282 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 20:43:38.702000 audit[1013]: SYSTEM_BOOT pid=1013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.707665 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 20:43:38.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:43:38.726365 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 20:43:38.760000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 20:43:38.760000 audit[1027]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc932e2930 a2=420 a3=0 items=0 ppid=1007 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:43:38.760000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 20:43:38.761614 augenrules[1027]: No rules
Feb 12 20:43:38.762529 systemd[1]: Finished audit-rules.service.
Feb 12 20:43:38.777301 systemd-resolved[1010]: Positive Trust Anchors:
Feb 12 20:43:38.777317 systemd-resolved[1010]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:43:38.777355 systemd-resolved[1010]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:43:38.786103 systemd-resolved[1010]: Using system hostname 'ci-3510-3-2-8-90b6ad721e.novalocal'.
Feb 12 20:43:38.788408 systemd[1]: Started systemd-resolved.service.
Feb 12 20:43:38.789008 systemd[1]: Reached target network.target.
Feb 12 20:43:38.789417 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:43:38.790976 systemd[1]: Started systemd-timesyncd.service.
Feb 12 20:43:38.791493 systemd[1]: Reached target time-set.target.
Feb 12 20:43:38.828332 systemd-timesyncd[1012]: Contacted time server 51.158.147.92:123 (0.flatcar.pool.ntp.org).
Feb 12 20:43:38.828827 systemd-timesyncd[1012]: Initial clock synchronization to Mon 2024-02-12 20:43:39.091152 UTC.
Feb 12 20:43:39.052128 ldconfig[995]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 20:43:39.065253 systemd[1]: Finished ldconfig.service.
Feb 12 20:43:39.068084 systemd[1]: Starting systemd-update-done.service...
Feb 12 20:43:39.083898 systemd[1]: Finished systemd-update-done.service.
Feb 12 20:43:39.085405 systemd[1]: Reached target sysinit.target.
Feb 12 20:43:39.086674 systemd[1]: Started motdgen.path.
Feb 12 20:43:39.087837 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 20:43:39.089400 systemd[1]: Started logrotate.timer.
Feb 12 20:43:39.090800 systemd[1]: Started mdadm.timer.
Feb 12 20:43:39.092080 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:43:39.093234 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:43:39.093313 systemd[1]: Reached target paths.target. Feb 12 20:43:39.094391 systemd[1]: Reached target timers.target. Feb 12 20:43:39.096432 systemd[1]: Listening on dbus.socket. Feb 12 20:43:39.100150 systemd[1]: Starting docker.socket... Feb 12 20:43:39.101911 systemd-networkd[980]: eth0: Gained IPv6LL Feb 12 20:43:39.108559 systemd[1]: Listening on sshd.socket. Feb 12 20:43:39.110232 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:43:39.111451 systemd[1]: Listening on docker.socket. Feb 12 20:43:39.118719 systemd[1]: Created slice system-sshd.slice. Feb 12 20:43:39.120135 systemd[1]: Reached target sockets.target. Feb 12 20:43:39.121477 systemd[1]: Reached target basic.target. Feb 12 20:43:39.122888 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:43:39.123148 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:43:39.125653 systemd[1]: Starting containerd.service... Feb 12 20:43:39.129929 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 20:43:39.135338 systemd[1]: Starting dbus.service... Feb 12 20:43:39.139260 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:43:39.143827 systemd[1]: Starting extend-filesystems.service... Feb 12 20:43:39.145346 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:43:39.148613 systemd[1]: Starting motdgen.service... 
Feb 12 20:43:39.150626 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:43:39.155051 systemd[1]: Starting prepare-critools.service... Feb 12 20:43:39.156610 systemd[1]: Starting prepare-helm.service... Feb 12 20:43:39.158890 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:43:39.160632 systemd[1]: Starting sshd-keygen.service... Feb 12 20:43:39.166185 systemd[1]: Starting systemd-logind.service... Feb 12 20:43:39.166719 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:43:39.166824 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:43:39.167319 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 20:43:39.169633 systemd[1]: Starting update-engine.service... Feb 12 20:43:39.193844 jq[1052]: true Feb 12 20:43:39.172527 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:43:39.211273 tar[1055]: ./ Feb 12 20:43:39.211273 tar[1055]: ./macvlan Feb 12 20:43:39.211592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:43:39.211805 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:43:39.212644 jq[1042]: false Feb 12 20:43:39.213459 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:43:39.213624 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:43:39.215371 tar[1057]: linux-amd64/helm Feb 12 20:43:39.238708 jq[1058]: true Feb 12 20:43:39.244395 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:43:39.244674 systemd[1]: Finished motdgen.service. 
Feb 12 20:43:39.258524 tar[1056]: crictl Feb 12 20:43:39.268812 extend-filesystems[1043]: Found vda Feb 12 20:43:39.271117 extend-filesystems[1043]: Found vda1 Feb 12 20:43:39.274183 dbus-daemon[1041]: [system] SELinux support is enabled Feb 12 20:43:39.274419 systemd[1]: Started dbus.service. Feb 12 20:43:39.277177 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:43:39.277212 systemd[1]: Reached target system-config.target. Feb 12 20:43:39.277726 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:43:39.277772 systemd[1]: Reached target user-config.target. Feb 12 20:43:39.280846 extend-filesystems[1043]: Found vda2 Feb 12 20:43:39.288801 extend-filesystems[1043]: Found vda3 Feb 12 20:43:39.291852 extend-filesystems[1043]: Found usr Feb 12 20:43:39.292699 extend-filesystems[1043]: Found vda4 Feb 12 20:43:39.292699 extend-filesystems[1043]: Found vda6 Feb 12 20:43:39.292699 extend-filesystems[1043]: Found vda7 Feb 12 20:43:39.292699 extend-filesystems[1043]: Found vda9 Feb 12 20:43:39.292699 extend-filesystems[1043]: Checking size of /dev/vda9 Feb 12 20:43:39.332909 extend-filesystems[1043]: Resized partition /dev/vda9 Feb 12 20:43:39.359876 bash[1092]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:43:39.360226 extend-filesystems[1098]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:43:39.362606 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 12 20:43:39.383527 env[1059]: time="2024-02-12T20:43:39.383427985Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:43:39.400399 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 12 20:43:39.404195 coreos-metadata[1038]: Feb 12 20:43:39.396 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 12 20:43:39.410228 update_engine[1051]: I0212 20:43:39.408974 1051 main.cc:92] Flatcar Update Engine starting Feb 12 20:43:39.416005 systemd[1]: Started update-engine.service. Feb 12 20:43:39.416241 update_engine[1051]: I0212 20:43:39.415997 1051 update_check_scheduler.cc:74] Next update check in 9m27s Feb 12 20:43:39.419283 systemd[1]: Started locksmithd.service. Feb 12 20:43:39.425564 systemd-logind[1050]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:43:39.426120 systemd-logind[1050]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:43:39.426404 systemd-logind[1050]: New seat seat0. Feb 12 20:43:39.428483 systemd[1]: Started systemd-logind.service. Feb 12 20:43:39.446347 tar[1055]: ./static Feb 12 20:43:39.492175 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 12 20:43:39.542334 env[1059]: time="2024-02-12T20:43:39.524154176Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:43:39.547375 extend-filesystems[1098]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 20:43:39.547375 extend-filesystems[1098]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 12 20:43:39.547375 extend-filesystems[1098]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 12 20:43:39.550602 extend-filesystems[1043]: Resized filesystem in /dev/vda9 Feb 12 20:43:39.547779 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:43:39.548002 systemd[1]: Finished extend-filesystems.service. 
Feb 12 20:43:39.553162 env[1059]: time="2024-02-12T20:43:39.553060022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:43:39.555327 env[1059]: time="2024-02-12T20:43:39.555267121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:43:39.555399 env[1059]: time="2024-02-12T20:43:39.555330564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:43:39.555715 env[1059]: time="2024-02-12T20:43:39.555661485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:43:39.555789 env[1059]: time="2024-02-12T20:43:39.555729585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:43:39.555789 env[1059]: time="2024-02-12T20:43:39.555774693Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:43:39.555849 env[1059]: time="2024-02-12T20:43:39.555794054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:43:39.555984 env[1059]: time="2024-02-12T20:43:39.555952959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:43:39.556419 env[1059]: time="2024-02-12T20:43:39.556390103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 20:43:39.556636 env[1059]: time="2024-02-12T20:43:39.556584108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:43:39.556675 env[1059]: time="2024-02-12T20:43:39.556633562Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:43:39.556854 env[1059]: time="2024-02-12T20:43:39.556825249Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:43:39.556854 env[1059]: time="2024-02-12T20:43:39.556850322Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:43:39.576456 env[1059]: time="2024-02-12T20:43:39.576391137Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:43:39.576542 env[1059]: time="2024-02-12T20:43:39.576469389Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:43:39.576542 env[1059]: time="2024-02-12T20:43:39.576498684Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:43:39.576620 env[1059]: time="2024-02-12T20:43:39.576571545Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:43:39.576620 env[1059]: time="2024-02-12T20:43:39.576594725Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:43:39.576678 env[1059]: time="2024-02-12T20:43:39.576634451Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 12 20:43:39.576678 env[1059]: time="2024-02-12T20:43:39.576659286Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:43:39.576729 env[1059]: time="2024-02-12T20:43:39.576678430Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:43:39.576781 env[1059]: time="2024-02-12T20:43:39.576727025Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:43:39.576781 env[1059]: time="2024-02-12T20:43:39.576749242Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:43:39.576843 env[1059]: time="2024-02-12T20:43:39.576785854Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:43:39.576843 env[1059]: time="2024-02-12T20:43:39.576826025Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:43:39.577101 env[1059]: time="2024-02-12T20:43:39.577052274Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:43:39.577226 env[1059]: time="2024-02-12T20:43:39.577200045Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577638369Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577701616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577735123Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577806628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577825689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577842391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577858451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577873777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577889982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577904263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577918729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.577937438Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.578117991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.578139070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 12 20:43:39.582964 env[1059]: time="2024-02-12T20:43:39.578155213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:43:39.583359 env[1059]: time="2024-02-12T20:43:39.578169380Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:43:39.583359 env[1059]: time="2024-02-12T20:43:39.578188865Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:43:39.583359 env[1059]: time="2024-02-12T20:43:39.578203559Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:43:39.583359 env[1059]: time="2024-02-12T20:43:39.578226708Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:43:39.583359 env[1059]: time="2024-02-12T20:43:39.578268131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 20:43:39.583491 env[1059]: time="2024-02-12T20:43:39.578513846Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:43:39.583491 env[1059]: time="2024-02-12T20:43:39.578585144Z" level=info msg="Connect containerd service" Feb 12 20:43:39.583491 env[1059]: time="2024-02-12T20:43:39.578627551Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:43:39.588632 env[1059]: time="2024-02-12T20:43:39.586036943Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:43:39.591726 env[1059]: time="2024-02-12T20:43:39.591689750Z" level=info msg="Start subscribing containerd event" Feb 12 20:43:39.591861 env[1059]: time="2024-02-12T20:43:39.591843098Z" level=info msg="Start recovering state" Feb 12 20:43:39.591964 env[1059]: time="2024-02-12T20:43:39.591906004Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 20:43:39.592043 env[1059]: time="2024-02-12T20:43:39.592027988Z" level=info msg="Start event monitor" Feb 12 20:43:39.592133 env[1059]: time="2024-02-12T20:43:39.592083380Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:43:39.592235 env[1059]: time="2024-02-12T20:43:39.592102545Z" level=info msg="Start snapshots syncer" Feb 12 20:43:39.592300 env[1059]: time="2024-02-12T20:43:39.592285964Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:43:39.592352 systemd[1]: Started containerd.service. 
Feb 12 20:43:39.592466 env[1059]: time="2024-02-12T20:43:39.592445615Z" level=info msg="Start streaming server" Feb 12 20:43:39.593415 env[1059]: time="2024-02-12T20:43:39.593239105Z" level=info msg="containerd successfully booted in 0.216460s" Feb 12 20:43:39.599167 tar[1055]: ./vlan Feb 12 20:43:39.642125 tar[1055]: ./portmap Feb 12 20:43:39.682341 tar[1055]: ./host-local Feb 12 20:43:39.718591 tar[1055]: ./vrf Feb 12 20:43:39.757009 tar[1055]: ./bridge Feb 12 20:43:39.777573 coreos-metadata[1038]: Feb 12 20:43:39.777 INFO Fetch successful Feb 12 20:43:39.777573 coreos-metadata[1038]: Feb 12 20:43:39.777 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 12 20:43:39.794715 coreos-metadata[1038]: Feb 12 20:43:39.794 INFO Fetch successful Feb 12 20:43:39.799179 unknown[1038]: wrote ssh authorized keys file for user: core Feb 12 20:43:39.827339 update-ssh-keys[1105]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:43:39.827810 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 20:43:39.833628 tar[1055]: ./tuning Feb 12 20:43:39.915183 tar[1055]: ./firewall Feb 12 20:43:40.001437 tar[1055]: ./host-device Feb 12 20:43:40.077597 tar[1055]: ./sbr Feb 12 20:43:40.145567 tar[1055]: ./loopback Feb 12 20:43:40.209116 tar[1055]: ./dhcp Feb 12 20:43:40.400488 tar[1055]: ./ptp Feb 12 20:43:40.501347 tar[1055]: ./ipvlan Feb 12 20:43:40.521689 tar[1057]: linux-amd64/LICENSE Feb 12 20:43:40.522479 systemd[1]: Finished prepare-critools.service. Feb 12 20:43:40.523457 tar[1057]: linux-amd64/README.md Feb 12 20:43:40.529648 systemd[1]: Finished prepare-helm.service. Feb 12 20:43:40.551600 tar[1055]: ./bandwidth Feb 12 20:43:40.637601 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 12 20:43:40.810349 locksmithd[1101]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:43:41.009484 sshd_keygen[1082]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:43:41.037845 systemd[1]: Finished sshd-keygen.service. Feb 12 20:43:41.042361 systemd[1]: Starting issuegen.service... Feb 12 20:43:41.045451 systemd[1]: Started sshd@0-172.24.4.230:22-172.24.4.1:36424.service. Feb 12 20:43:41.049166 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:43:41.049491 systemd[1]: Finished issuegen.service. Feb 12 20:43:41.055019 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:43:41.069096 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:43:41.073322 systemd[1]: Started getty@tty1.service. Feb 12 20:43:41.077543 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:43:41.079273 systemd[1]: Reached target getty.target. Feb 12 20:43:41.080564 systemd[1]: Reached target multi-user.target. Feb 12 20:43:41.086785 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:43:41.096481 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:43:41.096661 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:43:41.097289 systemd[1]: Startup finished in 953ms (kernel) + 12.430s (initrd) + 8.905s (userspace) = 22.289s. Feb 12 20:43:42.674841 sshd[1122]: Accepted publickey for core from 172.24.4.1 port 36424 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:43:42.680086 sshd[1122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:43:42.704149 systemd[1]: Created slice user-500.slice. Feb 12 20:43:42.708345 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:43:42.714522 systemd-logind[1050]: New session 1 of user core. Feb 12 20:43:42.732100 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:43:42.737548 systemd[1]: Starting user@500.service... 
Feb 12 20:43:42.744834 (systemd)[1131]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:43:42.879251 systemd[1131]: Queued start job for default target default.target. Feb 12 20:43:42.879840 systemd[1131]: Reached target paths.target. Feb 12 20:43:42.879861 systemd[1131]: Reached target sockets.target. Feb 12 20:43:42.879875 systemd[1131]: Reached target timers.target. Feb 12 20:43:42.879889 systemd[1131]: Reached target basic.target. Feb 12 20:43:42.880001 systemd[1]: Started user@500.service. Feb 12 20:43:42.880998 systemd[1]: Started session-1.scope. Feb 12 20:43:42.881427 systemd[1131]: Reached target default.target. Feb 12 20:43:42.881628 systemd[1131]: Startup finished in 123ms. Feb 12 20:43:43.400908 systemd[1]: Started sshd@1-172.24.4.230:22-172.24.4.1:36432.service. Feb 12 20:43:44.944111 sshd[1140]: Accepted publickey for core from 172.24.4.1 port 36432 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:43:44.947869 sshd[1140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:43:44.959158 systemd-logind[1050]: New session 2 of user core. Feb 12 20:43:44.959951 systemd[1]: Started session-2.scope. Feb 12 20:43:45.772831 sshd[1140]: pam_unix(sshd:session): session closed for user core Feb 12 20:43:45.782904 systemd[1]: Started sshd@2-172.24.4.230:22-172.24.4.1:55240.service. Feb 12 20:43:45.784189 systemd[1]: sshd@1-172.24.4.230:22-172.24.4.1:36432.service: Deactivated successfully. Feb 12 20:43:45.785851 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:43:45.789544 systemd-logind[1050]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:43:45.792460 systemd-logind[1050]: Removed session 2. 
Feb 12 20:43:47.040531 sshd[1145]: Accepted publickey for core from 172.24.4.1 port 55240 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:43:47.043855 sshd[1145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:43:47.053868 systemd-logind[1050]: New session 3 of user core. Feb 12 20:43:47.054122 systemd[1]: Started session-3.scope. Feb 12 20:43:47.690525 sshd[1145]: pam_unix(sshd:session): session closed for user core Feb 12 20:43:47.698253 systemd[1]: Started sshd@3-172.24.4.230:22-172.24.4.1:55246.service. Feb 12 20:43:47.703142 systemd[1]: sshd@2-172.24.4.230:22-172.24.4.1:55240.service: Deactivated successfully. Feb 12 20:43:47.704618 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:43:47.707693 systemd-logind[1050]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:43:47.710475 systemd-logind[1050]: Removed session 3. Feb 12 20:43:49.118209 sshd[1151]: Accepted publickey for core from 172.24.4.1 port 55246 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:43:49.119292 sshd[1151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:43:49.128824 systemd[1]: Started session-4.scope. Feb 12 20:43:49.129712 systemd-logind[1050]: New session 4 of user core. Feb 12 20:43:49.769235 sshd[1151]: pam_unix(sshd:session): session closed for user core Feb 12 20:43:49.773103 systemd[1]: sshd@3-172.24.4.230:22-172.24.4.1:55246.service: Deactivated successfully. Feb 12 20:43:49.774339 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:43:49.775547 systemd-logind[1050]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:43:49.777993 systemd[1]: Started sshd@4-172.24.4.230:22-172.24.4.1:55256.service. Feb 12 20:43:49.779864 systemd-logind[1050]: Removed session 4. 
Feb 12 20:43:50.891628 sshd[1158]: Accepted publickey for core from 172.24.4.1 port 55256 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:43:50.895133 sshd[1158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:43:50.901376 systemd[1]: Started session-5.scope. Feb 12 20:43:50.902051 systemd-logind[1050]: New session 5 of user core. Feb 12 20:43:51.486013 sudo[1161]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:43:51.487683 sudo[1161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:43:52.187774 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:43:52.197413 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:43:52.198104 systemd[1]: Reached target network-online.target. Feb 12 20:43:52.200704 systemd[1]: Starting docker.service... Feb 12 20:43:52.299170 env[1177]: time="2024-02-12T20:43:52.299081342Z" level=info msg="Starting up" Feb 12 20:43:52.302495 env[1177]: time="2024-02-12T20:43:52.302413505Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:43:52.302495 env[1177]: time="2024-02-12T20:43:52.302449238Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:43:52.302495 env[1177]: time="2024-02-12T20:43:52.302474591Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:43:52.302495 env[1177]: time="2024-02-12T20:43:52.302488820Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:43:52.305562 env[1177]: time="2024-02-12T20:43:52.305514890Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:43:52.305562 env[1177]: time="2024-02-12T20:43:52.305542520Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:43:52.305562 env[1177]: 
time="2024-02-12T20:43:52.305560951Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:43:52.305867 env[1177]: time="2024-02-12T20:43:52.305573687Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:43:52.314541 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport610650310-merged.mount: Deactivated successfully. Feb 12 20:43:52.434521 env[1177]: time="2024-02-12T20:43:52.434418897Z" level=info msg="Loading containers: start." Feb 12 20:43:52.609798 kernel: Initializing XFRM netlink socket Feb 12 20:43:52.698439 env[1177]: time="2024-02-12T20:43:52.698376383Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 20:43:52.797814 systemd-networkd[980]: docker0: Link UP Feb 12 20:43:52.819003 env[1177]: time="2024-02-12T20:43:52.818936802Z" level=info msg="Loading containers: done." Feb 12 20:43:52.853388 env[1177]: time="2024-02-12T20:43:52.853346187Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 20:43:52.853852 env[1177]: time="2024-02-12T20:43:52.853834721Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 20:43:52.854069 env[1177]: time="2024-02-12T20:43:52.854053489Z" level=info msg="Daemon has completed initialization" Feb 12 20:43:52.901752 systemd[1]: Started docker.service. Feb 12 20:43:52.923662 env[1177]: time="2024-02-12T20:43:52.923556394Z" level=info msg="API listen on /run/docker.sock" Feb 12 20:43:52.965488 systemd[1]: Reloading. 
Feb 12 20:43:53.101023 /usr/lib/systemd/system-generators/torcx-generator[1320]: time="2024-02-12T20:43:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:43:53.101385 /usr/lib/systemd/system-generators/torcx-generator[1320]: time="2024-02-12T20:43:53Z" level=info msg="torcx already run" Feb 12 20:43:53.189102 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:43:53.189125 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:43:53.217159 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:43:53.304785 systemd[1]: Started kubelet.service. Feb 12 20:43:53.445036 kubelet[1360]: E0212 20:43:53.444315 1360 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:43:53.450745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:43:53.451019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:43:54.326008 env[1059]: time="2024-02-12T20:43:54.325924241Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 20:43:55.164190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845603250.mount: Deactivated successfully. 
Feb 12 20:43:58.217868 env[1059]: time="2024-02-12T20:43:58.217798558Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:43:58.224410 env[1059]: time="2024-02-12T20:43:58.224341443Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:43:58.228155 env[1059]: time="2024-02-12T20:43:58.228097578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:43:58.229769 env[1059]: time="2024-02-12T20:43:58.229723344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:43:58.231976 env[1059]: time="2024-02-12T20:43:58.231921075Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 20:43:58.252417 env[1059]: time="2024-02-12T20:43:58.252353725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 20:44:02.095949 env[1059]: time="2024-02-12T20:44:02.095804294Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:02.101102 env[1059]: time="2024-02-12T20:44:02.100964126Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:44:02.105673 env[1059]: time="2024-02-12T20:44:02.105589616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:02.112833 env[1059]: time="2024-02-12T20:44:02.112681848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:02.117861 env[1059]: time="2024-02-12T20:44:02.115635554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 20:44:02.141330 env[1059]: time="2024-02-12T20:44:02.141232664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 20:44:03.540470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 20:44:03.540989 systemd[1]: Stopped kubelet.service. Feb 12 20:44:03.544234 systemd[1]: Started kubelet.service. Feb 12 20:44:03.681151 kubelet[1389]: E0212 20:44:03.681081 1389 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:44:03.685436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:44:03.685571 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 20:44:04.716422 env[1059]: time="2024-02-12T20:44:04.715757805Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:04.727241 env[1059]: time="2024-02-12T20:44:04.727183855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:04.733593 env[1059]: time="2024-02-12T20:44:04.733484560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:04.740303 env[1059]: time="2024-02-12T20:44:04.740166163Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:04.744811 env[1059]: time="2024-02-12T20:44:04.742676090Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 20:44:04.766302 env[1059]: time="2024-02-12T20:44:04.766256211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:44:07.187632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033287888.mount: Deactivated successfully. 
Feb 12 20:44:07.955045 env[1059]: time="2024-02-12T20:44:07.954874669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:07.958601 env[1059]: time="2024-02-12T20:44:07.958512999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:07.961836 env[1059]: time="2024-02-12T20:44:07.961756380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:07.964236 env[1059]: time="2024-02-12T20:44:07.964180686Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:07.964923 env[1059]: time="2024-02-12T20:44:07.964869395Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 20:44:07.980150 env[1059]: time="2024-02-12T20:44:07.980085253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 20:44:08.704508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626209254.mount: Deactivated successfully. 
Feb 12 20:44:08.718791 env[1059]: time="2024-02-12T20:44:08.718400401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:08.723566 env[1059]: time="2024-02-12T20:44:08.723462228Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:08.727695 env[1059]: time="2024-02-12T20:44:08.727624373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:08.731102 env[1059]: time="2024-02-12T20:44:08.731009983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:08.732699 env[1059]: time="2024-02-12T20:44:08.732625905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 20:44:08.756477 env[1059]: time="2024-02-12T20:44:08.756403480Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 20:44:09.872118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2091155629.mount: Deactivated successfully. Feb 12 20:44:13.797452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 20:44:13.798044 systemd[1]: Stopped kubelet.service. Feb 12 20:44:13.801783 systemd[1]: Started kubelet.service. 
Feb 12 20:44:13.913679 kubelet[1411]: E0212 20:44:13.913627 1411 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:44:13.916437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:44:13.916561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:44:17.225856 env[1059]: time="2024-02-12T20:44:17.225742477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:17.231021 env[1059]: time="2024-02-12T20:44:17.230430455Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:17.236146 env[1059]: time="2024-02-12T20:44:17.236022772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:17.238372 env[1059]: time="2024-02-12T20:44:17.238318073Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:17.239684 env[1059]: time="2024-02-12T20:44:17.239626265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 20:44:17.273279 env[1059]: time="2024-02-12T20:44:17.273206816Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 20:44:18.054280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125485869.mount: Deactivated successfully. 
Feb 12 20:44:19.509825 env[1059]: time="2024-02-12T20:44:19.509740404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:19.511806 env[1059]: time="2024-02-12T20:44:19.511774971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:19.513569 env[1059]: time="2024-02-12T20:44:19.513548438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:19.516418 env[1059]: time="2024-02-12T20:44:19.516369819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:19.521690 env[1059]: time="2024-02-12T20:44:19.521585281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 12 20:44:23.489119 systemd[1]: Stopped kubelet.service. Feb 12 20:44:23.515561 systemd[1]: Reloading. 
Feb 12 20:44:23.639497 /usr/lib/systemd/system-generators/torcx-generator[1510]: time="2024-02-12T20:44:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:44:23.639902 /usr/lib/systemd/system-generators/torcx-generator[1510]: time="2024-02-12T20:44:23Z" level=info msg="torcx already run" Feb 12 20:44:23.729832 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:44:23.729855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:44:23.755468 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:44:23.857227 systemd[1]: Started kubelet.service. Feb 12 20:44:23.951315 kubelet[1552]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:44:23.951315 kubelet[1552]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:44:23.951790 kubelet[1552]: I0212 20:44:23.951438 1552 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:44:23.955514 kubelet[1552]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 12 20:44:23.955514 kubelet[1552]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:44:24.349251 kubelet[1552]: I0212 20:44:24.349205 1552 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:44:24.349548 kubelet[1552]: I0212 20:44:24.349524 1552 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:44:24.350299 kubelet[1552]: I0212 20:44:24.350269 1552 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:44:24.361282 kubelet[1552]: E0212 20:44:24.361231 1552 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.361282 kubelet[1552]: I0212 20:44:24.361276 1552 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:44:24.368470 kubelet[1552]: I0212 20:44:24.368407 1552 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:44:24.369201 kubelet[1552]: I0212 20:44:24.369175 1552 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:44:24.369481 kubelet[1552]: I0212 20:44:24.369456 1552 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:44:24.369786 kubelet[1552]: I0212 20:44:24.369761 1552 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:44:24.369942 kubelet[1552]: I0212 20:44:24.369922 1552 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:44:24.370264 kubelet[1552]: I0212 20:44:24.370235 1552 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 20:44:24.375906 kubelet[1552]: I0212 20:44:24.375874 1552 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:44:24.376106 kubelet[1552]: I0212 20:44:24.376084 1552 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:44:24.376260 kubelet[1552]: I0212 20:44:24.376239 1552 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:44:24.376401 kubelet[1552]: I0212 20:44:24.376381 1552 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:44:24.378307 kubelet[1552]: W0212 20:44:24.378240 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.378533 kubelet[1552]: E0212 20:44:24.378509 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.378923 kubelet[1552]: W0212 20:44:24.378838 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-8-90b6ad721e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.379129 kubelet[1552]: E0212 20:44:24.379104 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-8-90b6ad721e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.379434 kubelet[1552]: I0212 20:44:24.379389 1552 kuberuntime_manager.go:244] "Container runtime 
initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:44:24.380087 kubelet[1552]: W0212 20:44:24.380060 1552 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:44:24.381042 kubelet[1552]: I0212 20:44:24.381016 1552 server.go:1186] "Started kubelet" Feb 12 20:44:24.383225 kubelet[1552]: E0212 20:44:24.383182 1552 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:44:24.383225 kubelet[1552]: E0212 20:44:24.383215 1552 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:44:24.383564 kubelet[1552]: E0212 20:44:24.383449 1552 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f0042d8a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 380979364, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 380979364, time.Local), 
Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.24.4.230:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.230:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:44:24.388058 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 20:44:24.388174 kubelet[1552]: I0212 20:44:24.384616 1552 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:44:24.388174 kubelet[1552]: I0212 20:44:24.385268 1552 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:44:24.388816 kubelet[1552]: I0212 20:44:24.388773 1552 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:44:24.392658 kubelet[1552]: I0212 20:44:24.392062 1552 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:44:24.392658 kubelet[1552]: I0212 20:44:24.392176 1552 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:44:24.392891 kubelet[1552]: W0212 20:44:24.392653 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.392891 kubelet[1552]: E0212 20:44:24.392693 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.393764 kubelet[1552]: E0212 20:44:24.393724 1552 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get 
"https://172.24.4.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-8-90b6ad721e.novalocal?timeout=10s": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.444451 kubelet[1552]: I0212 20:44:24.444416 1552 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:44:24.444700 kubelet[1552]: I0212 20:44:24.444690 1552 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:44:24.444832 kubelet[1552]: I0212 20:44:24.444821 1552 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:44:24.450211 kubelet[1552]: I0212 20:44:24.450173 1552 policy_none.go:49] "None policy: Start" Feb 12 20:44:24.451569 kubelet[1552]: I0212 20:44:24.451543 1552 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:44:24.451644 kubelet[1552]: I0212 20:44:24.451581 1552 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:44:24.457657 systemd[1]: Created slice kubepods.slice. Feb 12 20:44:24.462495 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 20:44:24.465605 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 20:44:24.470545 kubelet[1552]: I0212 20:44:24.470511 1552 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:44:24.474615 kubelet[1552]: I0212 20:44:24.474577 1552 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:44:24.476670 kubelet[1552]: E0212 20:44:24.476569 1552 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-8-90b6ad721e.novalocal\" not found" Feb 12 20:44:24.488793 kubelet[1552]: I0212 20:44:24.488775 1552 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 20:44:24.494555 kubelet[1552]: I0212 20:44:24.494523 1552 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-8-90b6ad721e.novalocal" Feb 12 20:44:24.495066 kubelet[1552]: E0212 20:44:24.495042 1552 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.230:6443/api/v1/nodes\": dial tcp 172.24.4.230:6443: connect: connection refused" node="ci-3510-3-2-8-90b6ad721e.novalocal" Feb 12 20:44:24.513353 kubelet[1552]: I0212 20:44:24.513321 1552 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 20:44:24.513353 kubelet[1552]: I0212 20:44:24.513347 1552 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:44:24.513473 kubelet[1552]: I0212 20:44:24.513372 1552 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:44:24.513473 kubelet[1552]: E0212 20:44:24.513421 1552 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:44:24.514615 kubelet[1552]: W0212 20:44:24.514558 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.514615 kubelet[1552]: E0212 20:44:24.514615 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.594377 kubelet[1552]: E0212 20:44:24.594304 1552 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.24.4.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-8-90b6ad721e.novalocal?timeout=10s": dial tcp 
172.24.4.230:6443: connect: connection refused Feb 12 20:44:24.618144 kubelet[1552]: I0212 20:44:24.614292 1552 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:44:24.618144 kubelet[1552]: I0212 20:44:24.616790 1552 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:44:24.619144 kubelet[1552]: I0212 20:44:24.619107 1552 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:44:24.620692 kubelet[1552]: I0212 20:44:24.620616 1552 status_manager.go:698] "Failed to get status for pod" podUID=8e73a4f66a4b82d7f5b4fefbafbcc2e3 pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal" err="Get \"https://172.24.4.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal\": dial tcp 172.24.4.230:6443: connect: connection refused" Feb 12 20:44:24.623350 kubelet[1552]: I0212 20:44:24.623315 1552 status_manager.go:698] "Failed to get status for pod" podUID=db5ac8f97b84c45c5bb118857d303a8c pod="kube-system/kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal" err="Get \"https://172.24.4.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal\": dial tcp 172.24.4.230:6443: connect: connection refused" Feb 12 20:44:24.626065 kubelet[1552]: I0212 20:44:24.625688 1552 status_manager.go:698] "Failed to get status for pod" podUID=409479b7c0ddc76f8029966d1d8aebc5 pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal" err="Get \"https://172.24.4.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\": dial tcp 172.24.4.230:6443: connect: connection refused" Feb 12 20:44:24.629852 systemd[1]: Created slice kubepods-burstable-pod8e73a4f66a4b82d7f5b4fefbafbcc2e3.slice. Feb 12 20:44:24.642372 systemd[1]: Created slice kubepods-burstable-poddb5ac8f97b84c45c5bb118857d303a8c.slice. Feb 12 20:44:24.650113 systemd[1]: Created slice kubepods-burstable-pod409479b7c0ddc76f8029966d1d8aebc5.slice. 
Feb 12 20:44:24.694448 kubelet[1552]: I0212 20:44:24.694407 1552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.694625 kubelet[1552]: I0212 20:44:24.694609 1552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.694991 kubelet[1552]: I0212 20:44:24.694961 1552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.695159 kubelet[1552]: I0212 20:44:24.695107 1552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e73a4f66a4b82d7f5b4fefbafbcc2e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"8e73a4f66a4b82d7f5b4fefbafbcc2e3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.695595 kubelet[1552]: I0212 20:44:24.695320 1552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.695646 kubelet[1552]: I0212 20:44:24.695630 1552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.696768 kubelet[1552]: I0212 20:44:24.695875 1552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db5ac8f97b84c45c5bb118857d303a8c-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"db5ac8f97b84c45c5bb118857d303a8c\") " pod="kube-system/kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.696835 kubelet[1552]: I0212 20:44:24.696788 1552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e73a4f66a4b82d7f5b4fefbafbcc2e3-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"8e73a4f66a4b82d7f5b4fefbafbcc2e3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.696952 kubelet[1552]: I0212 20:44:24.696904 1552 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e73a4f66a4b82d7f5b4fefbafbcc2e3-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"8e73a4f66a4b82d7f5b4fefbafbcc2e3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.697230 kubelet[1552]: I0212 20:44:24.697187 1552 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.698001 kubelet[1552]: E0212 20:44:24.697965 1552 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.230:6443/api/v1/nodes\": dial tcp 172.24.4.230:6443: connect: connection refused" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:24.809148 update_engine[1051]: I0212 20:44:24.808984 1051 update_attempter.cc:509] Updating boot flags...
Feb 12 20:44:24.941175 env[1059]: time="2024-02-12T20:44:24.940191550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal,Uid:8e73a4f66a4b82d7f5b4fefbafbcc2e3,Namespace:kube-system,Attempt:0,}"
Feb 12 20:44:24.947753 env[1059]: time="2024-02-12T20:44:24.947393615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal,Uid:db5ac8f97b84c45c5bb118857d303a8c,Namespace:kube-system,Attempt:0,}"
Feb 12 20:44:24.953542 env[1059]: time="2024-02-12T20:44:24.953397295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal,Uid:409479b7c0ddc76f8029966d1d8aebc5,Namespace:kube-system,Attempt:0,}"
Feb 12 20:44:24.996001 kubelet[1552]: E0212 20:44:24.995950 1552 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.24.4.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-8-90b6ad721e.novalocal?timeout=10s": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.101981 kubelet[1552]: I0212 20:44:25.101567 1552 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:25.102136 kubelet[1552]: E0212 20:44:25.102101 1552 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.230:6443/api/v1/nodes\": dial tcp 172.24.4.230:6443: connect: connection refused" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:25.294542 kubelet[1552]: W0212 20:44:25.294292 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-8-90b6ad721e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.294542 kubelet[1552]: E0212 20:44:25.294464 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-8-90b6ad721e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.393843 kubelet[1552]: W0212 20:44:25.393682 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.393843 kubelet[1552]: E0212 20:44:25.393846 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.574264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025021742.mount: Deactivated successfully.
Feb 12 20:44:25.583494 env[1059]: time="2024-02-12T20:44:25.583419477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.595078 env[1059]: time="2024-02-12T20:44:25.595015309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.601193 env[1059]: time="2024-02-12T20:44:25.601156336Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.603418 env[1059]: time="2024-02-12T20:44:25.603371262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.606179 env[1059]: time="2024-02-12T20:44:25.606145043Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.609071 env[1059]: time="2024-02-12T20:44:25.609035880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.615291 env[1059]: time="2024-02-12T20:44:25.615228297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.617872 env[1059]: time="2024-02-12T20:44:25.617846705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.619432 env[1059]: time="2024-02-12T20:44:25.619384046Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.621152 env[1059]: time="2024-02-12T20:44:25.621096706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.622778 env[1059]: time="2024-02-12T20:44:25.622753936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.626523 env[1059]: time="2024-02-12T20:44:25.626478743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:44:25.654146 env[1059]: time="2024-02-12T20:44:25.653940041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:44:25.654146 env[1059]: time="2024-02-12T20:44:25.653989478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:44:25.654146 env[1059]: time="2024-02-12T20:44:25.654004371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:44:25.654669 env[1059]: time="2024-02-12T20:44:25.654584273Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/427ec1cbf7ce4decb567e43942a6d754a5ec411cf72c4b51d0b89583059cfe80 pid=1642 runtime=io.containerd.runc.v2
Feb 12 20:44:25.691588 env[1059]: time="2024-02-12T20:44:25.691476817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:44:25.691588 env[1059]: time="2024-02-12T20:44:25.691534431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:44:25.691588 env[1059]: time="2024-02-12T20:44:25.691551889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:44:25.692345 env[1059]: time="2024-02-12T20:44:25.692072394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd3fc8ec035cf1519170e841de0a044767205ba29ce4ad2f33dc1a81d8077714 pid=1664 runtime=io.containerd.runc.v2
Feb 12 20:44:25.694908 systemd[1]: Started cri-containerd-427ec1cbf7ce4decb567e43942a6d754a5ec411cf72c4b51d0b89583059cfe80.scope.
Feb 12 20:44:25.700966 kubelet[1552]: W0212 20:44:25.700912 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.700966 kubelet[1552]: E0212 20:44:25.700968 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.701164 env[1059]: time="2024-02-12T20:44:25.700066435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:44:25.701164 env[1059]: time="2024-02-12T20:44:25.700154926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:44:25.701164 env[1059]: time="2024-02-12T20:44:25.700171131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:44:25.701164 env[1059]: time="2024-02-12T20:44:25.700357934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/775050bfb8c69dee7bb22af1d824d00af2a538f3a016eaf0890951c0db718792 pid=1678 runtime=io.containerd.runc.v2
Feb 12 20:44:25.732721 systemd[1]: Started cri-containerd-dd3fc8ec035cf1519170e841de0a044767205ba29ce4ad2f33dc1a81d8077714.scope.
Feb 12 20:44:25.750200 systemd[1]: Started cri-containerd-775050bfb8c69dee7bb22af1d824d00af2a538f3a016eaf0890951c0db718792.scope.
Feb 12 20:44:25.763304 kubelet[1552]: W0212 20:44:25.763039 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.763304 kubelet[1552]: E0212 20:44:25.763270 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.796949 kubelet[1552]: E0212 20:44:25.796876 1552 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.24.4.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-8-90b6ad721e.novalocal?timeout=10s": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:25.801355 env[1059]: time="2024-02-12T20:44:25.801239312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal,Uid:409479b7c0ddc76f8029966d1d8aebc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"427ec1cbf7ce4decb567e43942a6d754a5ec411cf72c4b51d0b89583059cfe80\""
Feb 12 20:44:25.809841 env[1059]: time="2024-02-12T20:44:25.808600816Z" level=info msg="CreateContainer within sandbox \"427ec1cbf7ce4decb567e43942a6d754a5ec411cf72c4b51d0b89583059cfe80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 20:44:25.842331 env[1059]: time="2024-02-12T20:44:25.842210166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal,Uid:8e73a4f66a4b82d7f5b4fefbafbcc2e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd3fc8ec035cf1519170e841de0a044767205ba29ce4ad2f33dc1a81d8077714\""
Feb 12 20:44:25.844970 env[1059]: time="2024-02-12T20:44:25.844867027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal,Uid:db5ac8f97b84c45c5bb118857d303a8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"775050bfb8c69dee7bb22af1d824d00af2a538f3a016eaf0890951c0db718792\""
Feb 12 20:44:25.847778 env[1059]: time="2024-02-12T20:44:25.847249744Z" level=info msg="CreateContainer within sandbox \"dd3fc8ec035cf1519170e841de0a044767205ba29ce4ad2f33dc1a81d8077714\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 20:44:25.848071 env[1059]: time="2024-02-12T20:44:25.848043970Z" level=info msg="CreateContainer within sandbox \"775050bfb8c69dee7bb22af1d824d00af2a538f3a016eaf0890951c0db718792\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 20:44:25.906220 kubelet[1552]: I0212 20:44:25.906157 1552 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:25.907162 kubelet[1552]: E0212 20:44:25.907113 1552 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.230:6443/api/v1/nodes\": dial tcp 172.24.4.230:6443: connect: connection refused" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:26.006054 env[1059]: time="2024-02-12T20:44:26.005918836Z" level=info msg="CreateContainer within sandbox \"dd3fc8ec035cf1519170e841de0a044767205ba29ce4ad2f33dc1a81d8077714\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"964e13eed3fc9d54da559a5e06df00424642b37fa8eab7683520ed8725e8733e\""
Feb 12 20:44:26.008319 env[1059]: time="2024-02-12T20:44:26.008180841Z" level=info msg="StartContainer for \"964e13eed3fc9d54da559a5e06df00424642b37fa8eab7683520ed8725e8733e\""
Feb 12 20:44:26.008882 env[1059]: time="2024-02-12T20:44:26.008810723Z" level=info msg="CreateContainer within sandbox \"427ec1cbf7ce4decb567e43942a6d754a5ec411cf72c4b51d0b89583059cfe80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"922d5e191bd7025ef8ba041438974d796304fa4837951541d71b6137cfd0f00c\""
Feb 12 20:44:26.009908 env[1059]: time="2024-02-12T20:44:26.009857201Z" level=info msg="StartContainer for \"922d5e191bd7025ef8ba041438974d796304fa4837951541d71b6137cfd0f00c\""
Feb 12 20:44:26.019115 env[1059]: time="2024-02-12T20:44:26.019042488Z" level=info msg="CreateContainer within sandbox \"775050bfb8c69dee7bb22af1d824d00af2a538f3a016eaf0890951c0db718792\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"db428367649f01716d699251a6a1e97dd7a05a01b2d0ae1b785755e3914cb440\""
Feb 12 20:44:26.020465 env[1059]: time="2024-02-12T20:44:26.020415375Z" level=info msg="StartContainer for \"db428367649f01716d699251a6a1e97dd7a05a01b2d0ae1b785755e3914cb440\""
Feb 12 20:44:26.062410 systemd[1]: Started cri-containerd-964e13eed3fc9d54da559a5e06df00424642b37fa8eab7683520ed8725e8733e.scope.
Feb 12 20:44:26.077100 systemd[1]: Started cri-containerd-922d5e191bd7025ef8ba041438974d796304fa4837951541d71b6137cfd0f00c.scope.
Feb 12 20:44:26.087293 systemd[1]: Started cri-containerd-db428367649f01716d699251a6a1e97dd7a05a01b2d0ae1b785755e3914cb440.scope.
Feb 12 20:44:26.160170 env[1059]: time="2024-02-12T20:44:26.159975328Z" level=info msg="StartContainer for \"db428367649f01716d699251a6a1e97dd7a05a01b2d0ae1b785755e3914cb440\" returns successfully"
Feb 12 20:44:26.177408 env[1059]: time="2024-02-12T20:44:26.177343510Z" level=info msg="StartContainer for \"964e13eed3fc9d54da559a5e06df00424642b37fa8eab7683520ed8725e8733e\" returns successfully"
Feb 12 20:44:26.178850 env[1059]: time="2024-02-12T20:44:26.178810539Z" level=info msg="StartContainer for \"922d5e191bd7025ef8ba041438974d796304fa4837951541d71b6137cfd0f00c\" returns successfully"
Feb 12 20:44:26.413998 kubelet[1552]: E0212 20:44:26.413917 1552 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:26.524398 kubelet[1552]: I0212 20:44:26.524372 1552 status_manager.go:698] "Failed to get status for pod" podUID=db5ac8f97b84c45c5bb118857d303a8c pod="kube-system/kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal" err="Get \"https://172.24.4.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal\": dial tcp 172.24.4.230:6443: connect: connection refused"
Feb 12 20:44:26.528229 kubelet[1552]: I0212 20:44:26.528210 1552 status_manager.go:698] "Failed to get status for pod" podUID=8e73a4f66a4b82d7f5b4fefbafbcc2e3 pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal" err="Get \"https://172.24.4.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal\": dial tcp 172.24.4.230:6443: connect: connection refused"
Feb 12 20:44:26.578480 kubelet[1552]: I0212 20:44:26.578438 1552 status_manager.go:698] "Failed to get status for pod" podUID=409479b7c0ddc76f8029966d1d8aebc5 pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal" err="Get \"https://172.24.4.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\": dial tcp 172.24.4.230:6443: connect: connection refused"
Feb 12 20:44:27.398442 kubelet[1552]: E0212 20:44:27.398403 1552 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.24.4.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-8-90b6ad721e.novalocal?timeout=10s": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:27.484423 kubelet[1552]: W0212 20:44:27.484200 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:27.485363 kubelet[1552]: E0212 20:44:27.485329 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:27.511226 kubelet[1552]: I0212 20:44:27.511185 1552 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:27.512104 kubelet[1552]: E0212 20:44:27.512071 1552 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.230:6443/api/v1/nodes\": dial tcp 172.24.4.230:6443: connect: connection refused" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:27.865796 kubelet[1552]: W0212 20:44:27.865553 1552 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:27.865796 kubelet[1552]: E0212 20:44:27.865690 1552 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.230:6443: connect: connection refused
Feb 12 20:44:30.614221 kubelet[1552]: E0212 20:44:30.614118 1552 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-2-8-90b6ad721e.novalocal\" not found" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:30.716427 kubelet[1552]: I0212 20:44:30.716380 1552 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:30.734965 kubelet[1552]: I0212 20:44:30.734905 1552 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:31.098854 kubelet[1552]: E0212 20:44:31.098644 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f0042d8a4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 380979364, time.Local),
LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 380979364, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 20:44:31.157359 kubelet[1552]: E0212 20:44:31.157061 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f0064c5e5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 383202789, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 383202789, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:44:31.218607 kubelet[1552]: E0212 20:44:31.218375 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f0400f4c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-8-90b6ad721e.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443770053, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443770053, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:44:31.276378 kubelet[1552]: E0212 20:44:31.276124 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f04012c98", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-8-90b6ad721e.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443784344, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443784344, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:44:31.334691 kubelet[1552]: E0212 20:44:31.334522 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f04013d8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-8-90b6ad721e.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443788684, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443788684, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:44:31.381785 kubelet[1552]: I0212 20:44:31.381541 1552 apiserver.go:52] "Watching apiserver"
Feb 12 20:44:31.393028 kubelet[1552]: I0212 20:44:31.392980 1552 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 20:44:31.394667 kubelet[1552]: E0212 20:44:31.394472 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f05d482fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 474411773, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 474411773, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 20:44:31.442798 kubelet[1552]: I0212 20:44:31.442745 1552 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 20:44:31.471313 kubelet[1552]: E0212 20:44:31.470691 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f0400f4c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-8-90b6ad721e.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443770053, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 494460472, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 20:44:31.530787 kubelet[1552]: E0212 20:44:31.530669 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f04012c98", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-8-90b6ad721e.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443784344, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 494466416, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:44:31.586015 kubelet[1552]: E0212 20:44:31.585894 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f04013d8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-8-90b6ad721e.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443788684, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 494469993, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:44:31.897166 kubelet[1552]: E0212 20:44:31.897054 1552 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-8-90b6ad721e.novalocal.17b3385f0400f4c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-8-90b6ad721e.novalocal", UID:"ci-3510-3-2-8-90b6ad721e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-8-90b6ad721e.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-8-90b6ad721e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 443770053, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 44, 24, 616543344, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
Feb 12 20:44:33.581804 systemd[1]: Reloading.
Feb 12 20:44:33.719048 /usr/lib/systemd/system-generators/torcx-generator[1897]: time="2024-02-12T20:44:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:44:33.719098 /usr/lib/systemd/system-generators/torcx-generator[1897]: time="2024-02-12T20:44:33Z" level=info msg="torcx already run"
Feb 12 20:44:33.801574 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:44:33.801760 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:44:33.836214 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:44:33.968688 systemd[1]: Stopping kubelet.service...
Feb 12 20:44:33.990761 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 20:44:33.991227 systemd[1]: Stopped kubelet.service.
Feb 12 20:44:33.991345 systemd[1]: kubelet.service: Consumed 1.067s CPU time.
Feb 12 20:44:33.995266 systemd[1]: Started kubelet.service.
Feb 12 20:44:34.108746 sudo[1947]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 12 20:44:34.109048 sudo[1947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 12 20:44:34.127003 kubelet[1937]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:44:34.127003 kubelet[1937]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:44:34.127003 kubelet[1937]: I0212 20:44:34.123939 1937 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 20:44:34.127003 kubelet[1937]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:44:34.127003 kubelet[1937]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:44:34.134148 kubelet[1937]: I0212 20:44:34.134104 1937 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 20:44:34.134148 kubelet[1937]: I0212 20:44:34.134136 1937 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 20:44:34.134395 kubelet[1937]: I0212 20:44:34.134376 1937 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 20:44:34.136404 kubelet[1937]: I0212 20:44:34.136381 1937 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 20:44:34.137508 kubelet[1937]: I0212 20:44:34.137485 1937 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 20:44:34.141931 kubelet[1937]: I0212 20:44:34.141903 1937 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 20:44:34.142173 kubelet[1937]: I0212 20:44:34.142155 1937 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 20:44:34.142261 kubelet[1937]: I0212 20:44:34.142242 1937 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 20:44:34.142365 kubelet[1937]: I0212 20:44:34.142271 1937 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 20:44:34.142365 kubelet[1937]: I0212 20:44:34.142285 1937 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 20:44:34.142365 kubelet[1937]: I0212 20:44:34.142324 1937 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:44:34.147206 kubelet[1937]: I0212 20:44:34.147182 1937 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 20:44:34.147275 kubelet[1937]: I0212 20:44:34.147211 1937 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 20:44:34.147275 kubelet[1937]: I0212 20:44:34.147236 1937 kubelet.go:297] "Adding apiserver pod source"
Feb 12 20:44:34.147275 kubelet[1937]: I0212 20:44:34.147251 1937 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 20:44:34.176908 kubelet[1937]: I0212 20:44:34.176885 1937 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 20:44:34.177444 kubelet[1937]: I0212 20:44:34.177432 1937 server.go:1186] "Started kubelet"
Feb 12 20:44:34.179143 kubelet[1937]: I0212 20:44:34.179129 1937 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 20:44:34.179377 kubelet[1937]: I0212 20:44:34.179363 1937 apiserver.go:52] "Watching apiserver"
Feb 12 20:44:34.182742 kubelet[1937]: I0212 20:44:34.181838 1937 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 20:44:34.185366 kubelet[1937]: I0212 20:44:34.185350 1937 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 20:44:34.187168 kubelet[1937]: E0212 20:44:34.187154 1937 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 20:44:34.187296 kubelet[1937]: E0212 20:44:34.187286 1937 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 20:44:34.193678 kubelet[1937]: I0212 20:44:34.193657 1937 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 20:44:34.195130 kubelet[1937]: I0212 20:44:34.195116 1937 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 20:44:34.221481 kubelet[1937]: I0212 20:44:34.221462 1937 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 20:44:34.271665 kubelet[1937]: I0212 20:44:34.271647 1937 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 20:44:34.271880 kubelet[1937]: I0212 20:44:34.271869 1937 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 20:44:34.271965 kubelet[1937]: I0212 20:44:34.271955 1937 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:44:34.272189 kubelet[1937]: I0212 20:44:34.272177 1937 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 20:44:34.272291 kubelet[1937]: I0212 20:44:34.272280 1937 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 20:44:34.272373 kubelet[1937]: I0212 20:44:34.272363 1937 policy_none.go:49] "None policy: Start"
Feb 12 20:44:34.273247 kubelet[1937]: I0212 20:44:34.273235 1937 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 20:44:34.273356 kubelet[1937]: I0212 20:44:34.273346 1937 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 20:44:34.273574 kubelet[1937]: I0212 20:44:34.273563 1937 state_mem.go:75] "Updated machine memory state"
Feb 12 20:44:34.280443 kubelet[1937]: I0212 20:44:34.280422 1937 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 20:44:34.280813 kubelet[1937]: I0212 20:44:34.280800 1937 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 20:44:34.309939 kubelet[1937]: I0212 20:44:34.309920 1937 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.322253 kubelet[1937]: I0212 20:44:34.322227 1937 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.322499 kubelet[1937]: I0212 20:44:34.322487 1937 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.331648 kubelet[1937]: I0212 20:44:34.331628 1937 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 20:44:34.331895 kubelet[1937]: I0212 20:44:34.331871 1937 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 20:44:34.332149 kubelet[1937]: I0212 20:44:34.332135 1937 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 20:44:34.332292 kubelet[1937]: E0212 20:44:34.332280 1937 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 20:44:34.433322 kubelet[1937]: I0212 20:44:34.433132 1937 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:44:34.433845 kubelet[1937]: I0212 20:44:34.433792 1937 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:44:34.435447 kubelet[1937]: I0212 20:44:34.435130 1937 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:44:34.496966 kubelet[1937]: I0212 20:44:34.496894 1937 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 20:44:34.506192 kubelet[1937]: I0212 20:44:34.506145 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db5ac8f97b84c45c5bb118857d303a8c-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"db5ac8f97b84c45c5bb118857d303a8c\") " pod="kube-system/kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.506572 kubelet[1937]: I0212 20:44:34.506216 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e73a4f66a4b82d7f5b4fefbafbcc2e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"8e73a4f66a4b82d7f5b4fefbafbcc2e3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.506572 kubelet[1937]: I0212 20:44:34.506258 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.506572 kubelet[1937]: I0212 20:44:34.506289 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.506572 kubelet[1937]: I0212 20:44:34.506329 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.507042 kubelet[1937]: I0212 20:44:34.506372 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.507042 kubelet[1937]: I0212 20:44:34.506425 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e73a4f66a4b82d7f5b4fefbafbcc2e3-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"8e73a4f66a4b82d7f5b4fefbafbcc2e3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.507042 kubelet[1937]: I0212 20:44:34.506588 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e73a4f66a4b82d7f5b4fefbafbcc2e3-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"8e73a4f66a4b82d7f5b4fefbafbcc2e3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.507042 kubelet[1937]: I0212 20:44:34.506623 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/409479b7c0ddc76f8029966d1d8aebc5-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" (UID: \"409479b7c0ddc76f8029966d1d8aebc5\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.507042 kubelet[1937]: I0212 20:44:34.506640 1937 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 20:44:34.563390 kubelet[1937]: E0212 20:44:34.563323 1937 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.764430 kubelet[1937]: E0212 20:44:34.764246 1937 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal"
Feb 12 20:44:34.873845 sudo[1947]: pam_unix(sudo:session): session closed for user root
Feb 12 20:44:35.197034 kubelet[1937]: I0212 20:44:35.196943 1937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-8-90b6ad721e.novalocal" podStartSLOduration=3.195923205 pod.CreationTimestamp="2024-02-12 20:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:44:35.195912112 +0000 UTC m=+1.180575596" watchObservedRunningTime="2024-02-12 20:44:35.195923205 +0000 UTC m=+1.180586619"
Feb 12 20:44:35.972125 kubelet[1937]: I0212 20:44:35.972058 1937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-8-90b6ad721e.novalocal" podStartSLOduration=3.97193293 pod.CreationTimestamp="2024-02-12 20:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:44:35.560325099 +0000 UTC m=+1.544988523" watchObservedRunningTime="2024-02-12 20:44:35.97193293 +0000 UTC m=+1.956596394"
Feb 12 20:44:37.343388 sudo[1161]: pam_unix(sudo:session): session closed for user root
Feb 12 20:44:37.561987 kubelet[1937]: I0212 20:44:37.561935 1937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-8-90b6ad721e.novalocal" podStartSLOduration=3.561851517 pod.CreationTimestamp="2024-02-12 20:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:44:37.162095357 +0000 UTC m=+3.146758781" watchObservedRunningTime="2024-02-12 20:44:37.561851517 +0000 UTC m=+3.546514981"
Feb 12 20:44:37.565085 sshd[1158]: pam_unix(sshd:session): session closed for user core
Feb 12 20:44:37.571271 systemd[1]: sshd@4-172.24.4.230:22-172.24.4.1:55256.service: Deactivated successfully.
Feb 12 20:44:37.573142 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 20:44:37.573509 systemd[1]: session-5.scope: Consumed 6.609s CPU time.
Feb 12 20:44:37.574901 systemd-logind[1050]: Session 5 logged out. Waiting for processes to exit.
Feb 12 20:44:37.577514 systemd-logind[1050]: Removed session 5.
Feb 12 20:44:46.073687 kubelet[1937]: I0212 20:44:46.073648 1937 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 20:44:46.074992 env[1059]: time="2024-02-12T20:44:46.074872505Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 20:44:46.075513 kubelet[1937]: I0212 20:44:46.075482 1937 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 20:44:46.861236 kubelet[1937]: I0212 20:44:46.861054 1937 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:44:46.866864 systemd[1]: Created slice kubepods-besteffort-pod14a5e9b2_c5ee_45ce_b5a9_4fa3fc6e003c.slice.
Feb 12 20:44:46.878860 kubelet[1937]: I0212 20:44:46.878783 1937 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:44:46.886100 systemd[1]: Created slice kubepods-burstable-podc860114b_c7ba_474e_9cda_721b1119a31a.slice.
Feb 12 20:44:46.900321 kubelet[1937]: I0212 20:44:46.900276 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c-xtables-lock\") pod \"kube-proxy-lb65b\" (UID: \"14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c\") " pod="kube-system/kube-proxy-lb65b"
Feb 12 20:44:46.900475 kubelet[1937]: I0212 20:44:46.900368 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-run\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900475 kubelet[1937]: I0212 20:44:46.900420 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cni-path\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900475 kubelet[1937]: I0212 20:44:46.900449 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-lib-modules\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900577 kubelet[1937]: I0212 20:44:46.900519 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c-lib-modules\") pod \"kube-proxy-lb65b\" (UID: \"14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c\") " pod="kube-system/kube-proxy-lb65b"
Feb 12 20:44:46.900577 kubelet[1937]: I0212 20:44:46.900555 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-bpf-maps\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900642 kubelet[1937]: I0212 20:44:46.900606 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-hostproc\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900642 kubelet[1937]: I0212 20:44:46.900637 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c860114b-c7ba-474e-9cda-721b1119a31a-clustermesh-secrets\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900706 kubelet[1937]: I0212 20:44:46.900680 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-host-proc-sys-net\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900789 kubelet[1937]: I0212 20:44:46.900743 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c860114b-c7ba-474e-9cda-721b1119a31a-hubble-tls\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900827 kubelet[1937]: I0212 20:44:46.900791 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gztf7\" (UniqueName: \"kubernetes.io/projected/c860114b-c7ba-474e-9cda-721b1119a31a-kube-api-access-gztf7\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900862 kubelet[1937]: I0212 20:44:46.900856 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-config-path\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.900930 kubelet[1937]: I0212 20:44:46.900907 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hr6w\" (UniqueName: \"kubernetes.io/projected/14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c-kube-api-access-2hr6w\") pod \"kube-proxy-lb65b\" (UID: \"14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c\") " pod="kube-system/kube-proxy-lb65b"
Feb 12 20:44:46.900976 kubelet[1937]: I0212 20:44:46.900939 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-etc-cni-netd\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.901014 kubelet[1937]: I0212 20:44:46.900965 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-host-proc-sys-kernel\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.901014 kubelet[1937]: I0212 20:44:46.901008 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c-kube-proxy\") pod \"kube-proxy-lb65b\" (UID: \"14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c\") " pod="kube-system/kube-proxy-lb65b"
Feb 12 20:44:46.901078 kubelet[1937]: I0212 20:44:46.901034 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-cgroup\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:46.901078 kubelet[1937]: I0212 20:44:46.901077 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-xtables-lock\") pod \"cilium-v557h\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") " pod="kube-system/cilium-v557h"
Feb 12 20:44:47.042184 kubelet[1937]: I0212 20:44:47.042102 1937 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:44:47.047201 systemd[1]: Created slice kubepods-besteffort-poda43228a8_2bb8_42cd_b7d0_fb83db9c1926.slice.
Feb 12 20:44:47.101834 kubelet[1937]: I0212 20:44:47.101788 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a43228a8-2bb8-42cd-b7d0-fb83db9c1926-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-q7wnl\" (UID: \"a43228a8-2bb8-42cd-b7d0-fb83db9c1926\") " pod="kube-system/cilium-operator-f59cbd8c6-q7wnl"
Feb 12 20:44:47.102202 kubelet[1937]: I0212 20:44:47.101852 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22gxm\" (UniqueName: \"kubernetes.io/projected/a43228a8-2bb8-42cd-b7d0-fb83db9c1926-kube-api-access-22gxm\") pod \"cilium-operator-f59cbd8c6-q7wnl\" (UID: \"a43228a8-2bb8-42cd-b7d0-fb83db9c1926\") " pod="kube-system/cilium-operator-f59cbd8c6-q7wnl"
Feb 12 20:44:47.180096 env[1059]: time="2024-02-12T20:44:47.178554146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lb65b,Uid:14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c,Namespace:kube-system,Attempt:0,}"
Feb 12 20:44:47.200960 env[1059]: time="2024-02-12T20:44:47.200886782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v557h,Uid:c860114b-c7ba-474e-9cda-721b1119a31a,Namespace:kube-system,Attempt:0,}"
Feb 12 20:44:47.224606 env[1059]: time="2024-02-12T20:44:47.224394520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:44:47.224606 env[1059]: time="2024-02-12T20:44:47.224516073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:44:47.225323 env[1059]: time="2024-02-12T20:44:47.225011801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:44:47.225879 env[1059]: time="2024-02-12T20:44:47.225666926Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fe032d7aec218da1c6efee0e8c3258571bdd01cf5dbdf487b34422b751a752e pid=2040 runtime=io.containerd.runc.v2
Feb 12 20:44:47.264089 env[1059]: time="2024-02-12T20:44:47.262697474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:44:47.264089 env[1059]: time="2024-02-12T20:44:47.262751621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:44:47.264089 env[1059]: time="2024-02-12T20:44:47.262765740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:44:47.264089 env[1059]: time="2024-02-12T20:44:47.262920949Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda pid=2064 runtime=io.containerd.runc.v2
Feb 12 20:44:47.271156 systemd[1]: Started cri-containerd-1fe032d7aec218da1c6efee0e8c3258571bdd01cf5dbdf487b34422b751a752e.scope.
Feb 12 20:44:47.288263 systemd[1]: Started cri-containerd-1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda.scope.
Feb 12 20:44:47.328279 env[1059]: time="2024-02-12T20:44:47.328202637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v557h,Uid:c860114b-c7ba-474e-9cda-721b1119a31a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\"" Feb 12 20:44:47.347107 env[1059]: time="2024-02-12T20:44:47.347066470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lb65b,Uid:14a5e9b2-c5ee-45ce-b5a9-4fa3fc6e003c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fe032d7aec218da1c6efee0e8c3258571bdd01cf5dbdf487b34422b751a752e\"" Feb 12 20:44:47.349643 env[1059]: time="2024-02-12T20:44:47.349613597Z" level=info msg="CreateContainer within sandbox \"1fe032d7aec218da1c6efee0e8c3258571bdd01cf5dbdf487b34422b751a752e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:44:47.384576 env[1059]: time="2024-02-12T20:44:47.384535321Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 20:44:47.490439 env[1059]: time="2024-02-12T20:44:47.490154778Z" level=info msg="CreateContainer within sandbox \"1fe032d7aec218da1c6efee0e8c3258571bdd01cf5dbdf487b34422b751a752e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a77e163fb63221b14f555a8cbc63956afcca84460c843c2b40dba1076d51b4be\"" Feb 12 20:44:47.494894 env[1059]: time="2024-02-12T20:44:47.494054679Z" level=info msg="StartContainer for \"a77e163fb63221b14f555a8cbc63956afcca84460c843c2b40dba1076d51b4be\"" Feb 12 20:44:47.534123 systemd[1]: Started cri-containerd-a77e163fb63221b14f555a8cbc63956afcca84460c843c2b40dba1076d51b4be.scope. 
Feb 12 20:44:47.592684 env[1059]: time="2024-02-12T20:44:47.592627573Z" level=info msg="StartContainer for \"a77e163fb63221b14f555a8cbc63956afcca84460c843c2b40dba1076d51b4be\" returns successfully" Feb 12 20:44:47.651983 env[1059]: time="2024-02-12T20:44:47.651855577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-q7wnl,Uid:a43228a8-2bb8-42cd-b7d0-fb83db9c1926,Namespace:kube-system,Attempt:0,}" Feb 12 20:44:47.701765 env[1059]: time="2024-02-12T20:44:47.701286517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:44:47.701765 env[1059]: time="2024-02-12T20:44:47.701386266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:44:47.701765 env[1059]: time="2024-02-12T20:44:47.701418881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:44:47.702378 env[1059]: time="2024-02-12T20:44:47.702299145Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24 pid=2154 runtime=io.containerd.runc.v2 Feb 12 20:44:47.738485 systemd[1]: Started cri-containerd-35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24.scope. Feb 12 20:44:47.807633 env[1059]: time="2024-02-12T20:44:47.807576108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-q7wnl,Uid:a43228a8-2bb8-42cd-b7d0-fb83db9c1926,Namespace:kube-system,Attempt:0,} returns sandbox id \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\"" Feb 12 20:44:54.798340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075256961.mount: Deactivated successfully. 
Feb 12 20:44:59.580850 env[1059]: time="2024-02-12T20:44:59.580670632Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:59.587974 env[1059]: time="2024-02-12T20:44:59.587895792Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:59.593345 env[1059]: time="2024-02-12T20:44:59.593285017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:44:59.597014 env[1059]: time="2024-02-12T20:44:59.595815427Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 20:44:59.604132 env[1059]: time="2024-02-12T20:44:59.603941577Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:44:59.609342 env[1059]: time="2024-02-12T20:44:59.609223492Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:44:59.641425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357242165.mount: Deactivated successfully. 
Feb 12 20:44:59.660473 env[1059]: time="2024-02-12T20:44:59.660395437Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\"" Feb 12 20:44:59.663865 env[1059]: time="2024-02-12T20:44:59.661846828Z" level=info msg="StartContainer for \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\"" Feb 12 20:44:59.730996 systemd[1]: Started cri-containerd-367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6.scope. Feb 12 20:44:59.775591 env[1059]: time="2024-02-12T20:44:59.775510632Z" level=info msg="StartContainer for \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\" returns successfully" Feb 12 20:44:59.781791 systemd[1]: cri-containerd-367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6.scope: Deactivated successfully. Feb 12 20:45:00.165952 env[1059]: time="2024-02-12T20:45:00.165865053Z" level=info msg="shim disconnected" id=367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6 Feb 12 20:45:00.166370 env[1059]: time="2024-02-12T20:45:00.166299464Z" level=warning msg="cleaning up after shim disconnected" id=367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6 namespace=k8s.io Feb 12 20:45:00.166538 env[1059]: time="2024-02-12T20:45:00.166504677Z" level=info msg="cleaning up dead shim" Feb 12 20:45:00.188859 env[1059]: time="2024-02-12T20:45:00.188787989Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:45:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2345 runtime=io.containerd.runc.v2\n" Feb 12 20:45:00.440787 env[1059]: time="2024-02-12T20:45:00.439877982Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 
20:45:00.465637 kubelet[1937]: I0212 20:45:00.465535 1937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lb65b" podStartSLOduration=14.465394629 pod.CreationTimestamp="2024-02-12 20:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:44:48.424142434 +0000 UTC m=+14.408805888" watchObservedRunningTime="2024-02-12 20:45:00.465394629 +0000 UTC m=+26.450058094" Feb 12 20:45:00.475277 env[1059]: time="2024-02-12T20:45:00.474998076Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\"" Feb 12 20:45:00.479233 env[1059]: time="2024-02-12T20:45:00.479143211Z" level=info msg="StartContainer for \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\"" Feb 12 20:45:00.520092 systemd[1]: Started cri-containerd-7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95.scope. Feb 12 20:45:00.558092 env[1059]: time="2024-02-12T20:45:00.558033786Z" level=info msg="StartContainer for \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\" returns successfully" Feb 12 20:45:00.571131 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:45:00.571404 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:45:00.571822 systemd[1]: Stopping systemd-sysctl.service... Feb 12 20:45:00.575949 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:45:00.582977 systemd[1]: cri-containerd-7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95.scope: Deactivated successfully. 
Feb 12 20:45:00.624518 env[1059]: time="2024-02-12T20:45:00.624465220Z" level=info msg="shim disconnected" id=7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95 Feb 12 20:45:00.624989 env[1059]: time="2024-02-12T20:45:00.624968688Z" level=warning msg="cleaning up after shim disconnected" id=7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95 namespace=k8s.io Feb 12 20:45:00.625058 env[1059]: time="2024-02-12T20:45:00.625043996Z" level=info msg="cleaning up dead shim" Feb 12 20:45:00.632533 systemd[1]: run-containerd-runc-k8s.io-367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6-runc.TbWxEn.mount: Deactivated successfully. Feb 12 20:45:00.632634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6-rootfs.mount: Deactivated successfully. Feb 12 20:45:00.635949 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:45:00.640218 env[1059]: time="2024-02-12T20:45:00.640156505Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:45:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2409 runtime=io.containerd.runc.v2\n" Feb 12 20:45:01.470644 env[1059]: time="2024-02-12T20:45:01.470548859Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:45:01.498684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421944261.mount: Deactivated successfully. Feb 12 20:45:01.504222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533078542.mount: Deactivated successfully. 
Feb 12 20:45:01.511924 env[1059]: time="2024-02-12T20:45:01.511874492Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\"" Feb 12 20:45:01.512642 env[1059]: time="2024-02-12T20:45:01.512602069Z" level=info msg="StartContainer for \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\"" Feb 12 20:45:01.533985 systemd[1]: Started cri-containerd-e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72.scope. Feb 12 20:45:01.574667 systemd[1]: cri-containerd-e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72.scope: Deactivated successfully. Feb 12 20:45:01.575370 env[1059]: time="2024-02-12T20:45:01.575127938Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc860114b_c7ba_474e_9cda_721b1119a31a.slice/cri-containerd-e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72.scope/memory.events\": no such file or directory" Feb 12 20:45:01.581230 env[1059]: time="2024-02-12T20:45:01.581196539Z" level=info msg="StartContainer for \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\" returns successfully" Feb 12 20:45:01.616875 env[1059]: time="2024-02-12T20:45:01.616823116Z" level=info msg="shim disconnected" id=e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72 Feb 12 20:45:01.617183 env[1059]: time="2024-02-12T20:45:01.617162030Z" level=warning msg="cleaning up after shim disconnected" id=e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72 namespace=k8s.io Feb 12 20:45:01.617256 env[1059]: time="2024-02-12T20:45:01.617241707Z" level=info msg="cleaning up dead shim" Feb 12 20:45:01.626744 env[1059]: time="2024-02-12T20:45:01.626658464Z" level=warning 
msg="cleanup warnings time=\"2024-02-12T20:45:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2471 runtime=io.containerd.runc.v2\n" Feb 12 20:45:02.398344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3446837766.mount: Deactivated successfully. Feb 12 20:45:02.469630 env[1059]: time="2024-02-12T20:45:02.469534586Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:45:02.492861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362978252.mount: Deactivated successfully. Feb 12 20:45:02.506932 env[1059]: time="2024-02-12T20:45:02.506895614Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\"" Feb 12 20:45:02.508993 env[1059]: time="2024-02-12T20:45:02.508964889Z" level=info msg="StartContainer for \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\"" Feb 12 20:45:02.539022 systemd[1]: Started cri-containerd-21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c.scope. Feb 12 20:45:02.582883 systemd[1]: cri-containerd-21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c.scope: Deactivated successfully. 
Feb 12 20:45:02.588818 env[1059]: time="2024-02-12T20:45:02.588639731Z" level=info msg="StartContainer for \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\" returns successfully" Feb 12 20:45:02.589067 env[1059]: time="2024-02-12T20:45:02.584633810Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc860114b_c7ba_474e_9cda_721b1119a31a.slice/cri-containerd-21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c.scope/memory.events\": no such file or directory" Feb 12 20:45:02.649046 env[1059]: time="2024-02-12T20:45:02.648947689Z" level=info msg="shim disconnected" id=21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c Feb 12 20:45:02.649046 env[1059]: time="2024-02-12T20:45:02.648993919Z" level=warning msg="cleaning up after shim disconnected" id=21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c namespace=k8s.io Feb 12 20:45:02.649046 env[1059]: time="2024-02-12T20:45:02.649004801Z" level=info msg="cleaning up dead shim" Feb 12 20:45:02.661510 env[1059]: time="2024-02-12T20:45:02.661469187Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:45:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2528 runtime=io.containerd.runc.v2\n" Feb 12 20:45:03.501177 env[1059]: time="2024-02-12T20:45:03.501089397Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:45:03.536969 env[1059]: time="2024-02-12T20:45:03.536920882Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:45:03.540842 env[1059]: time="2024-02-12T20:45:03.540692107Z" 
level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:45:03.547810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3587166535.mount: Deactivated successfully. Feb 12 20:45:03.549322 env[1059]: time="2024-02-12T20:45:03.549290197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:45:03.549645 env[1059]: time="2024-02-12T20:45:03.549618569Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 20:45:03.555481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119295335.mount: Deactivated successfully. Feb 12 20:45:03.561367 env[1059]: time="2024-02-12T20:45:03.561314887Z" level=info msg="CreateContainer within sandbox \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:45:03.571120 env[1059]: time="2024-02-12T20:45:03.571075603Z" level=info msg="CreateContainer within sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\"" Feb 12 20:45:03.573897 env[1059]: time="2024-02-12T20:45:03.573853894Z" level=info msg="StartContainer for \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\"" Feb 12 20:45:03.599754 systemd[1]: Started cri-containerd-ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900.scope. 
Feb 12 20:45:04.070751 env[1059]: time="2024-02-12T20:45:04.069066180Z" level=info msg="CreateContainer within sandbox \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\"" Feb 12 20:45:04.073084 env[1059]: time="2024-02-12T20:45:04.073014488Z" level=info msg="StartContainer for \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\"" Feb 12 20:45:04.097033 env[1059]: time="2024-02-12T20:45:04.094382000Z" level=info msg="StartContainer for \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\" returns successfully" Feb 12 20:45:04.133758 systemd[1]: run-containerd-runc-k8s.io-f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e-runc.800Wco.mount: Deactivated successfully. Feb 12 20:45:04.147478 systemd[1]: Started cri-containerd-f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e.scope. Feb 12 20:45:04.234982 env[1059]: time="2024-02-12T20:45:04.234900494Z" level=info msg="StartContainer for \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\" returns successfully" Feb 12 20:45:04.279362 kubelet[1937]: I0212 20:45:04.278494 1937 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:45:04.370637 kubelet[1937]: I0212 20:45:04.370512 1937 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:45:04.381942 systemd[1]: Created slice kubepods-burstable-pod08515373_381c_4ca1_a8fd_b6e1bdb41432.slice. Feb 12 20:45:04.396914 kubelet[1937]: I0212 20:45:04.396852 1937 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:45:04.402936 systemd[1]: Created slice kubepods-burstable-pod16e72d38_acf8_4f26_acf0_b38ddfa59043.slice. 
Feb 12 20:45:04.444225 kubelet[1937]: I0212 20:45:04.443977 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16e72d38-acf8-4f26-acf0-b38ddfa59043-config-volume\") pod \"coredns-787d4945fb-jzbcs\" (UID: \"16e72d38-acf8-4f26-acf0-b38ddfa59043\") " pod="kube-system/coredns-787d4945fb-jzbcs" Feb 12 20:45:04.444569 kubelet[1937]: I0212 20:45:04.444319 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08515373-381c-4ca1-a8fd-b6e1bdb41432-config-volume\") pod \"coredns-787d4945fb-46r8r\" (UID: \"08515373-381c-4ca1-a8fd-b6e1bdb41432\") " pod="kube-system/coredns-787d4945fb-46r8r" Feb 12 20:45:04.444569 kubelet[1937]: I0212 20:45:04.444465 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9pkc\" (UniqueName: \"kubernetes.io/projected/08515373-381c-4ca1-a8fd-b6e1bdb41432-kube-api-access-k9pkc\") pod \"coredns-787d4945fb-46r8r\" (UID: \"08515373-381c-4ca1-a8fd-b6e1bdb41432\") " pod="kube-system/coredns-787d4945fb-46r8r" Feb 12 20:45:04.444569 kubelet[1937]: I0212 20:45:04.444562 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzsb9\" (UniqueName: \"kubernetes.io/projected/16e72d38-acf8-4f26-acf0-b38ddfa59043-kube-api-access-pzsb9\") pod \"coredns-787d4945fb-jzbcs\" (UID: \"16e72d38-acf8-4f26-acf0-b38ddfa59043\") " pod="kube-system/coredns-787d4945fb-jzbcs" Feb 12 20:45:04.754766 kubelet[1937]: I0212 20:45:04.754589 1937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-v557h" podStartSLOduration=-9.223372018100248e+09 pod.CreationTimestamp="2024-02-12 20:44:46 +0000 UTC" firstStartedPulling="2024-02-12 20:44:47.333241549 +0000 UTC m=+13.317904963" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2024-02-12 20:45:04.52674717 +0000 UTC m=+30.511410584" watchObservedRunningTime="2024-02-12 20:45:04.75452781 +0000 UTC m=+30.739191234" Feb 12 20:45:04.755025 kubelet[1937]: I0212 20:45:04.754891 1937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-q7wnl" podStartSLOduration=-9.223372018099913e+09 pod.CreationTimestamp="2024-02-12 20:44:46 +0000 UTC" firstStartedPulling="2024-02-12 20:44:47.809522367 +0000 UTC m=+13.794185781" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:45:04.751915828 +0000 UTC m=+30.736579252" watchObservedRunningTime="2024-02-12 20:45:04.754863707 +0000 UTC m=+30.739527161" Feb 12 20:45:04.990759 env[1059]: time="2024-02-12T20:45:04.990122575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-46r8r,Uid:08515373-381c-4ca1-a8fd-b6e1bdb41432,Namespace:kube-system,Attempt:0,}" Feb 12 20:45:05.007396 env[1059]: time="2024-02-12T20:45:05.007240594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jzbcs,Uid:16e72d38-acf8-4f26-acf0-b38ddfa59043,Namespace:kube-system,Attempt:0,}" Feb 12 20:45:08.204937 systemd-networkd[980]: cilium_host: Link UP Feb 12 20:45:08.211669 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 20:45:08.211910 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 20:45:08.213653 systemd-networkd[980]: cilium_net: Link UP Feb 12 20:45:08.215887 systemd-networkd[980]: cilium_net: Gained carrier Feb 12 20:45:08.216116 systemd-networkd[980]: cilium_host: Gained carrier Feb 12 20:45:08.375571 systemd-networkd[980]: cilium_vxlan: Link UP Feb 12 20:45:08.375671 systemd-networkd[980]: cilium_vxlan: Gained carrier Feb 12 20:45:09.022162 systemd-networkd[980]: cilium_net: Gained IPv6LL Feb 12 20:45:09.150119 systemd-networkd[980]: cilium_host: Gained IPv6LL Feb 12 20:45:09.283470 kernel: NET: 
Registered PF_ALG protocol family Feb 12 20:45:10.305851 systemd-networkd[980]: cilium_vxlan: Gained IPv6LL Feb 12 20:45:10.315482 systemd-networkd[980]: lxc_health: Link UP Feb 12 20:45:10.346879 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:45:10.347317 systemd-networkd[980]: lxc_health: Gained carrier Feb 12 20:45:10.590085 systemd-networkd[980]: lxcda223569e117: Link UP Feb 12 20:45:10.606862 kernel: eth0: renamed from tmp70a6e Feb 12 20:45:10.611987 systemd-networkd[980]: lxcf8f59fe31eda: Link UP Feb 12 20:45:10.616868 kernel: eth0: renamed from tmp7e208 Feb 12 20:45:10.630444 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcda223569e117: link becomes ready Feb 12 20:45:10.629350 systemd-networkd[980]: lxcda223569e117: Gained carrier Feb 12 20:45:10.640953 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf8f59fe31eda: link becomes ready Feb 12 20:45:10.636336 systemd-networkd[980]: lxcf8f59fe31eda: Gained carrier Feb 12 20:45:11.710015 systemd-networkd[980]: lxcda223569e117: Gained IPv6LL Feb 12 20:45:12.030326 systemd-networkd[980]: lxcf8f59fe31eda: Gained IPv6LL Feb 12 20:45:12.094071 systemd-networkd[980]: lxc_health: Gained IPv6LL Feb 12 20:45:15.559150 env[1059]: time="2024-02-12T20:45:15.559040305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:45:15.559745 env[1059]: time="2024-02-12T20:45:15.559100471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:45:15.559745 env[1059]: time="2024-02-12T20:45:15.559136360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:45:15.559975 env[1059]: time="2024-02-12T20:45:15.559933570Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70a6ef5cfc895feb49d6df10a9dda03492ee24077c42ac08b61afd920b8850f1 pid=3108 runtime=io.containerd.runc.v2 Feb 12 20:45:15.597666 systemd[1]: Started cri-containerd-70a6ef5cfc895feb49d6df10a9dda03492ee24077c42ac08b61afd920b8850f1.scope. Feb 12 20:45:15.617838 env[1059]: time="2024-02-12T20:45:15.617759053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:45:15.617980 env[1059]: time="2024-02-12T20:45:15.617857714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:45:15.617980 env[1059]: time="2024-02-12T20:45:15.617898582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:45:15.618146 env[1059]: time="2024-02-12T20:45:15.618099089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e2081e2d766b0e1230480e26692580c643db0bc5cb6785ebd0653a8bf8235ed pid=3139 runtime=io.containerd.runc.v2 Feb 12 20:45:15.651001 systemd[1]: Started cri-containerd-7e2081e2d766b0e1230480e26692580c643db0bc5cb6785ebd0653a8bf8235ed.scope. 
Feb 12 20:45:15.683443 env[1059]: time="2024-02-12T20:45:15.683338304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-46r8r,Uid:08515373-381c-4ca1-a8fd-b6e1bdb41432,Namespace:kube-system,Attempt:0,} returns sandbox id \"70a6ef5cfc895feb49d6df10a9dda03492ee24077c42ac08b61afd920b8850f1\""
Feb 12 20:45:15.689142 env[1059]: time="2024-02-12T20:45:15.689105703Z" level=info msg="CreateContainer within sandbox \"70a6ef5cfc895feb49d6df10a9dda03492ee24077c42ac08b61afd920b8850f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 20:45:15.722470 env[1059]: time="2024-02-12T20:45:15.722403731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-jzbcs,Uid:16e72d38-acf8-4f26-acf0-b38ddfa59043,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e2081e2d766b0e1230480e26692580c643db0bc5cb6785ebd0653a8bf8235ed\""
Feb 12 20:45:15.727983 env[1059]: time="2024-02-12T20:45:15.727934433Z" level=info msg="CreateContainer within sandbox \"7e2081e2d766b0e1230480e26692580c643db0bc5cb6785ebd0653a8bf8235ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 20:45:15.989541 env[1059]: time="2024-02-12T20:45:15.989334071Z" level=info msg="CreateContainer within sandbox \"7e2081e2d766b0e1230480e26692580c643db0bc5cb6785ebd0653a8bf8235ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba7ff1076aa4caa02309eec0066519f1dd07e03aea551aed140993392dab9f9c\""
Feb 12 20:45:15.992388 env[1059]: time="2024-02-12T20:45:15.992305191Z" level=info msg="StartContainer for \"ba7ff1076aa4caa02309eec0066519f1dd07e03aea551aed140993392dab9f9c\""
Feb 12 20:45:15.996566 env[1059]: time="2024-02-12T20:45:15.996483961Z" level=info msg="CreateContainer within sandbox \"70a6ef5cfc895feb49d6df10a9dda03492ee24077c42ac08b61afd920b8850f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b8e782f9cd57d0058d0c095e512b46374d8fcf8da25fa7ec13777e001b9ab80\""
Feb 12 20:45:16.000561 env[1059]: time="2024-02-12T20:45:16.000503025Z" level=info msg="StartContainer for \"2b8e782f9cd57d0058d0c095e512b46374d8fcf8da25fa7ec13777e001b9ab80\""
Feb 12 20:45:16.046886 systemd[1]: Started cri-containerd-2b8e782f9cd57d0058d0c095e512b46374d8fcf8da25fa7ec13777e001b9ab80.scope.
Feb 12 20:45:16.059181 systemd[1]: Started cri-containerd-ba7ff1076aa4caa02309eec0066519f1dd07e03aea551aed140993392dab9f9c.scope.
Feb 12 20:45:16.122116 env[1059]: time="2024-02-12T20:45:16.122046681Z" level=info msg="StartContainer for \"ba7ff1076aa4caa02309eec0066519f1dd07e03aea551aed140993392dab9f9c\" returns successfully"
Feb 12 20:45:16.122767 env[1059]: time="2024-02-12T20:45:16.122049226Z" level=info msg="StartContainer for \"2b8e782f9cd57d0058d0c095e512b46374d8fcf8da25fa7ec13777e001b9ab80\" returns successfully"
Feb 12 20:45:16.569096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669570730.mount: Deactivated successfully.
Feb 12 20:45:16.581212 kubelet[1937]: I0212 20:45:16.581172 1937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-jzbcs" podStartSLOduration=29.581124131 pod.CreationTimestamp="2024-02-12 20:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:45:16.566487258 +0000 UTC m=+42.551150692" watchObservedRunningTime="2024-02-12 20:45:16.581124131 +0000 UTC m=+42.565787545"
Feb 12 20:45:17.568938 kubelet[1937]: I0212 20:45:17.568858 1937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-46r8r" podStartSLOduration=31.568687163 pod.CreationTimestamp="2024-02-12 20:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:45:16.582033298 +0000 UTC m=+42.566696723" watchObservedRunningTime="2024-02-12 20:45:17.568687163 +0000 UTC m=+43.553350627"
Feb 12 20:45:50.444341 systemd[1]: Started sshd@5-172.24.4.230:22-172.24.4.1:35334.service.
Feb 12 20:45:51.680349 sshd[3349]: Accepted publickey for core from 172.24.4.1 port 35334 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:45:51.685430 sshd[3349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:45:51.698238 systemd-logind[1050]: New session 6 of user core.
Feb 12 20:45:51.699954 systemd[1]: Started session-6.scope.
Feb 12 20:45:52.734508 sshd[3349]: pam_unix(sshd:session): session closed for user core
Feb 12 20:45:52.741344 systemd-logind[1050]: Session 6 logged out. Waiting for processes to exit.
Feb 12 20:45:52.743035 systemd[1]: sshd@5-172.24.4.230:22-172.24.4.1:35334.service: Deactivated successfully.
Feb 12 20:45:52.744745 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 20:45:52.747032 systemd-logind[1050]: Removed session 6.
Feb 12 20:45:57.746308 systemd[1]: Started sshd@6-172.24.4.230:22-172.24.4.1:37896.service.
Feb 12 20:45:59.146864 sshd[3362]: Accepted publickey for core from 172.24.4.1 port 37896 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:45:59.149665 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:45:59.165201 systemd-logind[1050]: New session 7 of user core.
Feb 12 20:45:59.169267 systemd[1]: Started session-7.scope.
Feb 12 20:45:59.978391 sshd[3362]: pam_unix(sshd:session): session closed for user core
Feb 12 20:45:59.983533 systemd[1]: sshd@6-172.24.4.230:22-172.24.4.1:37896.service: Deactivated successfully.
Feb 12 20:45:59.985426 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 20:45:59.986822 systemd-logind[1050]: Session 7 logged out. Waiting for processes to exit.
Feb 12 20:45:59.989503 systemd-logind[1050]: Removed session 7.
Feb 12 20:46:04.989563 systemd[1]: Started sshd@7-172.24.4.230:22-172.24.4.1:46828.service.
Feb 12 20:46:06.426006 sshd[3374]: Accepted publickey for core from 172.24.4.1 port 46828 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:06.428943 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:06.439393 systemd-logind[1050]: New session 8 of user core.
Feb 12 20:46:06.440324 systemd[1]: Started session-8.scope.
Feb 12 20:46:07.193133 sshd[3374]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:07.198770 systemd[1]: sshd@7-172.24.4.230:22-172.24.4.1:46828.service: Deactivated successfully.
Feb 12 20:46:07.200369 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 20:46:07.202073 systemd-logind[1050]: Session 8 logged out. Waiting for processes to exit.
Feb 12 20:46:07.204365 systemd-logind[1050]: Removed session 8.
Feb 12 20:46:12.206359 systemd[1]: Started sshd@8-172.24.4.230:22-172.24.4.1:46844.service.
Feb 12 20:46:13.441097 sshd[3387]: Accepted publickey for core from 172.24.4.1 port 46844 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:13.447297 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:13.461652 systemd-logind[1050]: New session 9 of user core.
Feb 12 20:46:13.462571 systemd[1]: Started session-9.scope.
Feb 12 20:46:14.162069 sshd[3387]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:14.176006 systemd[1]: sshd@8-172.24.4.230:22-172.24.4.1:46844.service: Deactivated successfully.
Feb 12 20:46:14.177963 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 20:46:14.179701 systemd-logind[1050]: Session 9 logged out. Waiting for processes to exit.
Feb 12 20:46:14.182648 systemd-logind[1050]: Removed session 9.
Feb 12 20:46:19.169295 systemd[1]: Started sshd@9-172.24.4.230:22-172.24.4.1:38202.service.
Feb 12 20:46:20.675503 sshd[3403]: Accepted publickey for core from 172.24.4.1 port 38202 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:20.678648 sshd[3403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:20.689498 systemd-logind[1050]: New session 10 of user core.
Feb 12 20:46:20.691927 systemd[1]: Started session-10.scope.
Feb 12 20:46:21.447628 sshd[3403]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:21.456806 systemd[1]: Started sshd@10-172.24.4.230:22-172.24.4.1:38214.service.
Feb 12 20:46:21.458267 systemd[1]: sshd@9-172.24.4.230:22-172.24.4.1:38202.service: Deactivated successfully.
Feb 12 20:46:21.460903 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 20:46:21.465882 systemd-logind[1050]: Session 10 logged out. Waiting for processes to exit.
Feb 12 20:46:21.468789 systemd-logind[1050]: Removed session 10.
Feb 12 20:46:23.009269 sshd[3414]: Accepted publickey for core from 172.24.4.1 port 38214 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:23.010980 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:23.023233 systemd[1]: Started session-11.scope.
Feb 12 20:46:23.024106 systemd-logind[1050]: New session 11 of user core.
Feb 12 20:46:25.011356 sshd[3414]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:25.023435 systemd[1]: Started sshd@11-172.24.4.230:22-172.24.4.1:40252.service.
Feb 12 20:46:25.039081 systemd[1]: sshd@10-172.24.4.230:22-172.24.4.1:38214.service: Deactivated successfully.
Feb 12 20:46:25.044475 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 20:46:25.050850 systemd-logind[1050]: Session 11 logged out. Waiting for processes to exit.
Feb 12 20:46:25.053521 systemd-logind[1050]: Removed session 11.
Feb 12 20:46:26.441880 sshd[3424]: Accepted publickey for core from 172.24.4.1 port 40252 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:26.444771 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:26.455915 systemd-logind[1050]: New session 12 of user core.
Feb 12 20:46:26.456911 systemd[1]: Started session-12.scope.
Feb 12 20:46:27.192147 sshd[3424]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:27.196837 systemd[1]: sshd@11-172.24.4.230:22-172.24.4.1:40252.service: Deactivated successfully.
Feb 12 20:46:27.198622 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 20:46:27.200066 systemd-logind[1050]: Session 12 logged out. Waiting for processes to exit.
Feb 12 20:46:27.202540 systemd-logind[1050]: Removed session 12.
Feb 12 20:46:32.203002 systemd[1]: Started sshd@12-172.24.4.230:22-172.24.4.1:40262.service.
Feb 12 20:46:33.734157 sshd[3436]: Accepted publickey for core from 172.24.4.1 port 40262 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:33.737495 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:33.749987 systemd-logind[1050]: New session 13 of user core.
Feb 12 20:46:33.754816 systemd[1]: Started session-13.scope.
Feb 12 20:46:34.592234 sshd[3436]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:34.599668 systemd[1]: sshd@12-172.24.4.230:22-172.24.4.1:40262.service: Deactivated successfully.
Feb 12 20:46:34.602617 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 20:46:34.607642 systemd-logind[1050]: Session 13 logged out. Waiting for processes to exit.
Feb 12 20:46:34.609211 systemd[1]: Started sshd@13-172.24.4.230:22-172.24.4.1:39442.service.
Feb 12 20:46:34.613395 systemd-logind[1050]: Removed session 13.
Feb 12 20:46:35.912989 sshd[3450]: Accepted publickey for core from 172.24.4.1 port 39442 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:35.916197 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:35.927268 systemd-logind[1050]: New session 14 of user core.
Feb 12 20:46:35.928236 systemd[1]: Started session-14.scope.
Feb 12 20:46:37.222292 sshd[3450]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:37.230203 systemd[1]: Started sshd@14-172.24.4.230:22-172.24.4.1:39450.service.
Feb 12 20:46:37.233410 systemd[1]: sshd@13-172.24.4.230:22-172.24.4.1:39442.service: Deactivated successfully.
Feb 12 20:46:37.235298 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 20:46:37.238067 systemd-logind[1050]: Session 14 logged out. Waiting for processes to exit.
Feb 12 20:46:37.242138 systemd-logind[1050]: Removed session 14.
Feb 12 20:46:38.627777 sshd[3458]: Accepted publickey for core from 172.24.4.1 port 39450 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:38.630571 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:38.641953 systemd[1]: Started session-15.scope.
Feb 12 20:46:38.642881 systemd-logind[1050]: New session 15 of user core.
Feb 12 20:46:40.812298 sshd[3458]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:40.824965 systemd[1]: Started sshd@15-172.24.4.230:22-172.24.4.1:39466.service.
Feb 12 20:46:40.839364 systemd[1]: sshd@14-172.24.4.230:22-172.24.4.1:39450.service: Deactivated successfully.
Feb 12 20:46:40.841269 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 20:46:40.847576 systemd-logind[1050]: Session 15 logged out. Waiting for processes to exit.
Feb 12 20:46:40.853825 systemd-logind[1050]: Removed session 15.
Feb 12 20:46:42.112199 sshd[3528]: Accepted publickey for core from 172.24.4.1 port 39466 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:42.115980 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:42.127938 systemd-logind[1050]: New session 16 of user core.
Feb 12 20:46:42.129344 systemd[1]: Started session-16.scope.
Feb 12 20:46:43.366072 sshd[3528]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:43.372792 systemd[1]: sshd@15-172.24.4.230:22-172.24.4.1:39466.service: Deactivated successfully.
Feb 12 20:46:43.374689 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 20:46:43.380504 systemd[1]: Started sshd@16-172.24.4.230:22-172.24.4.1:39476.service.
Feb 12 20:46:43.382577 systemd-logind[1050]: Session 16 logged out. Waiting for processes to exit.
Feb 12 20:46:43.387480 systemd-logind[1050]: Removed session 16.
Feb 12 20:46:44.835136 sshd[3539]: Accepted publickey for core from 172.24.4.1 port 39476 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:44.838433 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:44.852188 systemd[1]: Started session-17.scope.
Feb 12 20:46:44.853621 systemd-logind[1050]: New session 17 of user core.
Feb 12 20:46:45.691698 sshd[3539]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:45.698978 systemd-logind[1050]: Session 17 logged out. Waiting for processes to exit.
Feb 12 20:46:45.699677 systemd[1]: sshd@16-172.24.4.230:22-172.24.4.1:39476.service: Deactivated successfully.
Feb 12 20:46:45.701419 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 20:46:45.703262 systemd-logind[1050]: Removed session 17.
Feb 12 20:46:50.705241 systemd[1]: Started sshd@17-172.24.4.230:22-172.24.4.1:60640.service.
Feb 12 20:46:52.519092 sshd[3582]: Accepted publickey for core from 172.24.4.1 port 60640 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:52.521651 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:52.532855 systemd-logind[1050]: New session 18 of user core.
Feb 12 20:46:52.533028 systemd[1]: Started session-18.scope.
Feb 12 20:46:53.278044 sshd[3582]: pam_unix(sshd:session): session closed for user core
Feb 12 20:46:53.283862 systemd[1]: sshd@17-172.24.4.230:22-172.24.4.1:60640.service: Deactivated successfully.
Feb 12 20:46:53.285656 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 20:46:53.287527 systemd-logind[1050]: Session 18 logged out. Waiting for processes to exit.
Feb 12 20:46:53.290358 systemd-logind[1050]: Removed session 18.
Feb 12 20:46:58.290918 systemd[1]: Started sshd@18-172.24.4.230:22-172.24.4.1:41644.service.
Feb 12 20:46:59.748786 sshd[3595]: Accepted publickey for core from 172.24.4.1 port 41644 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:46:59.752860 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:46:59.768256 systemd-logind[1050]: New session 19 of user core.
Feb 12 20:46:59.770285 systemd[1]: Started session-19.scope.
Feb 12 20:47:00.399509 sshd[3595]: pam_unix(sshd:session): session closed for user core
Feb 12 20:47:00.405347 systemd[1]: sshd@18-172.24.4.230:22-172.24.4.1:41644.service: Deactivated successfully.
Feb 12 20:47:00.407824 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 20:47:00.410578 systemd-logind[1050]: Session 19 logged out. Waiting for processes to exit.
Feb 12 20:47:00.413167 systemd-logind[1050]: Removed session 19.
Feb 12 20:47:05.413225 systemd[1]: Started sshd@19-172.24.4.230:22-172.24.4.1:59636.service.
Feb 12 20:47:07.109538 sshd[3607]: Accepted publickey for core from 172.24.4.1 port 59636 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:47:07.111903 sshd[3607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:47:07.122383 systemd[1]: Started session-20.scope.
Feb 12 20:47:07.123227 systemd-logind[1050]: New session 20 of user core.
Feb 12 20:47:07.925552 sshd[3607]: pam_unix(sshd:session): session closed for user core
Feb 12 20:47:07.934579 systemd[1]: Started sshd@20-172.24.4.230:22-172.24.4.1:59644.service.
Feb 12 20:47:07.940539 systemd[1]: sshd@19-172.24.4.230:22-172.24.4.1:59636.service: Deactivated successfully.
Feb 12 20:47:07.942082 systemd[1]: session-20.scope: Deactivated successfully.
Feb 12 20:47:07.943552 systemd-logind[1050]: Session 20 logged out. Waiting for processes to exit.
Feb 12 20:47:07.945686 systemd-logind[1050]: Removed session 20.
Feb 12 20:47:09.486424 sshd[3618]: Accepted publickey for core from 172.24.4.1 port 59644 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:47:09.489698 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:47:09.496481 systemd[1]: Started session-21.scope.
Feb 12 20:47:09.496845 systemd-logind[1050]: New session 21 of user core.
Feb 12 20:47:12.127197 env[1059]: time="2024-02-12T20:47:12.127128971Z" level=info msg="StopContainer for \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\" with timeout 30 (s)"
Feb 12 20:47:12.134532 env[1059]: time="2024-02-12T20:47:12.134479391Z" level=info msg="Stop container \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\" with signal terminated"
Feb 12 20:47:12.153089 systemd[1]: cri-containerd-f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e.scope: Deactivated successfully.
Feb 12 20:47:12.159557 env[1059]: time="2024-02-12T20:47:12.159455120Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:47:12.177938 env[1059]: time="2024-02-12T20:47:12.177897278Z" level=info msg="StopContainer for \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\" with timeout 1 (s)"
Feb 12 20:47:12.178549 env[1059]: time="2024-02-12T20:47:12.178525965Z" level=info msg="Stop container \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\" with signal terminated"
Feb 12 20:47:12.191538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e-rootfs.mount: Deactivated successfully.
Feb 12 20:47:12.193063 systemd-networkd[980]: lxc_health: Link DOWN
Feb 12 20:47:12.193067 systemd-networkd[980]: lxc_health: Lost carrier
Feb 12 20:47:12.201749 env[1059]: time="2024-02-12T20:47:12.201664347Z" level=info msg="shim disconnected" id=f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e
Feb 12 20:47:12.202569 env[1059]: time="2024-02-12T20:47:12.202543064Z" level=warning msg="cleaning up after shim disconnected" id=f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e namespace=k8s.io
Feb 12 20:47:12.202665 env[1059]: time="2024-02-12T20:47:12.202649269Z" level=info msg="cleaning up dead shim"
Feb 12 20:47:12.211885 env[1059]: time="2024-02-12T20:47:12.211834399Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3668 runtime=io.containerd.runc.v2\n"
Feb 12 20:47:12.227185 env[1059]: time="2024-02-12T20:47:12.227114278Z" level=info msg="StopContainer for \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\" returns successfully"
Feb 12 20:47:12.228358 systemd[1]: cri-containerd-ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900.scope: Deactivated successfully.
Feb 12 20:47:12.228616 systemd[1]: cri-containerd-ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900.scope: Consumed 9.775s CPU time.
Feb 12 20:47:12.231217 env[1059]: time="2024-02-12T20:47:12.231173348Z" level=info msg="StopPodSandbox for \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\""
Feb 12 20:47:12.231558 env[1059]: time="2024-02-12T20:47:12.231536004Z" level=info msg="Container to stop \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:47:12.234700 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24-shm.mount: Deactivated successfully.
Feb 12 20:47:12.244883 systemd[1]: cri-containerd-35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24.scope: Deactivated successfully.
Feb 12 20:47:12.261880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900-rootfs.mount: Deactivated successfully.
Feb 12 20:47:12.272089 env[1059]: time="2024-02-12T20:47:12.272035841Z" level=info msg="shim disconnected" id=ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900
Feb 12 20:47:12.272421 env[1059]: time="2024-02-12T20:47:12.272390883Z" level=warning msg="cleaning up after shim disconnected" id=ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900 namespace=k8s.io
Feb 12 20:47:12.272503 env[1059]: time="2024-02-12T20:47:12.272487959Z" level=info msg="cleaning up dead shim"
Feb 12 20:47:12.295550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24-rootfs.mount: Deactivated successfully.
Feb 12 20:47:12.297981 env[1059]: time="2024-02-12T20:47:12.297931496Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3710 runtime=io.containerd.runc.v2\n"
Feb 12 20:47:12.301928 env[1059]: time="2024-02-12T20:47:12.301879615Z" level=info msg="shim disconnected" id=35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24
Feb 12 20:47:12.302420 env[1059]: time="2024-02-12T20:47:12.302399271Z" level=warning msg="cleaning up after shim disconnected" id=35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24 namespace=k8s.io
Feb 12 20:47:12.302521 env[1059]: time="2024-02-12T20:47:12.302504804Z" level=info msg="cleaning up dead shim"
Feb 12 20:47:12.302966 env[1059]: time="2024-02-12T20:47:12.302939418Z" level=info msg="StopContainer for \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\" returns successfully"
Feb 12 20:47:12.303862 env[1059]: time="2024-02-12T20:47:12.303836811Z" level=info msg="StopPodSandbox for \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\""
Feb 12 20:47:12.304188 env[1059]: time="2024-02-12T20:47:12.304153990Z" level=info msg="Container to stop \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:47:12.304286 env[1059]: time="2024-02-12T20:47:12.304266035Z" level=info msg="Container to stop \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:47:12.304415 env[1059]: time="2024-02-12T20:47:12.304395523Z" level=info msg="Container to stop \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:47:12.304514 env[1059]: time="2024-02-12T20:47:12.304495215Z" level=info msg="Container to stop \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:47:12.304617 env[1059]: time="2024-02-12T20:47:12.304597962Z" level=info msg="Container to stop \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:47:12.315105 systemd[1]: cri-containerd-1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda.scope: Deactivated successfully.
Feb 12 20:47:12.316887 env[1059]: time="2024-02-12T20:47:12.316704540Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3728 runtime=io.containerd.runc.v2\n"
Feb 12 20:47:12.317557 env[1059]: time="2024-02-12T20:47:12.317525115Z" level=info msg="TearDown network for sandbox \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\" successfully"
Feb 12 20:47:12.317692 env[1059]: time="2024-02-12T20:47:12.317651808Z" level=info msg="StopPodSandbox for \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\" returns successfully"
Feb 12 20:47:12.346755 kubelet[1937]: I0212 20:47:12.346518 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a43228a8-2bb8-42cd-b7d0-fb83db9c1926-cilium-config-path\") pod \"a43228a8-2bb8-42cd-b7d0-fb83db9c1926\" (UID: \"a43228a8-2bb8-42cd-b7d0-fb83db9c1926\") "
Feb 12 20:47:12.346755 kubelet[1937]: I0212 20:47:12.346637 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22gxm\" (UniqueName: \"kubernetes.io/projected/a43228a8-2bb8-42cd-b7d0-fb83db9c1926-kube-api-access-22gxm\") pod \"a43228a8-2bb8-42cd-b7d0-fb83db9c1926\" (UID: \"a43228a8-2bb8-42cd-b7d0-fb83db9c1926\") "
Feb 12 20:47:12.347299 kubelet[1937]: W0212 20:47:12.346518 1937 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a43228a8-2bb8-42cd-b7d0-fb83db9c1926/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 20:47:12.352774 kubelet[1937]: I0212 20:47:12.349632 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43228a8-2bb8-42cd-b7d0-fb83db9c1926-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a43228a8-2bb8-42cd-b7d0-fb83db9c1926" (UID: "a43228a8-2bb8-42cd-b7d0-fb83db9c1926"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:47:12.356905 kubelet[1937]: I0212 20:47:12.356854 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43228a8-2bb8-42cd-b7d0-fb83db9c1926-kube-api-access-22gxm" (OuterVolumeSpecName: "kube-api-access-22gxm") pod "a43228a8-2bb8-42cd-b7d0-fb83db9c1926" (UID: "a43228a8-2bb8-42cd-b7d0-fb83db9c1926"). InnerVolumeSpecName "kube-api-access-22gxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:47:12.366627 env[1059]: time="2024-02-12T20:47:12.365830905Z" level=info msg="shim disconnected" id=1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda
Feb 12 20:47:12.366627 env[1059]: time="2024-02-12T20:47:12.365897482Z" level=warning msg="cleaning up after shim disconnected" id=1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda namespace=k8s.io
Feb 12 20:47:12.366627 env[1059]: time="2024-02-12T20:47:12.365909566Z" level=info msg="cleaning up dead shim"
Feb 12 20:47:12.376589 env[1059]: time="2024-02-12T20:47:12.376530402Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3759 runtime=io.containerd.runc.v2\n"
Feb 12 20:47:12.376886 env[1059]: time="2024-02-12T20:47:12.376855496Z" level=info msg="TearDown network for sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" successfully"
Feb 12 20:47:12.376933 env[1059]: time="2024-02-12T20:47:12.376885303Z" level=info msg="StopPodSandbox for \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" returns successfully"
Feb 12 20:47:12.447792 kubelet[1937]: I0212 20:47:12.447545 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cni-path" (OuterVolumeSpecName: "cni-path") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:47:12.449029 kubelet[1937]: I0212 20:47:12.448922 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cni-path\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.450599 kubelet[1937]: I0212 20:47:12.450564 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c860114b-c7ba-474e-9cda-721b1119a31a-clustermesh-secrets\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.451119 kubelet[1937]: I0212 20:47:12.451093 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-cgroup\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.451535 kubelet[1937]: I0212 20:47:12.451334 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:47:12.451907 kubelet[1937]: I0212 20:47:12.451829 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gztf7\" (UniqueName: \"kubernetes.io/projected/c860114b-c7ba-474e-9cda-721b1119a31a-kube-api-access-gztf7\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.452230 kubelet[1937]: I0212 20:47:12.452202 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-host-proc-sys-kernel\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.452555 kubelet[1937]: I0212 20:47:12.452528 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-xtables-lock\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.452949 kubelet[1937]: I0212 20:47:12.452922 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-hostproc\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.453666 kubelet[1937]: I0212 20:47:12.453295 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c860114b-c7ba-474e-9cda-721b1119a31a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:47:12.453957 kubelet[1937]: I0212 20:47:12.453388 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:47:12.454163 kubelet[1937]: I0212 20:47:12.453437 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:47:12.454322 kubelet[1937]: I0212 20:47:12.453478 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-hostproc" (OuterVolumeSpecName: "hostproc") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:47:12.454676 kubelet[1937]: I0212 20:47:12.454634 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-host-proc-sys-net\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.455015 kubelet[1937]: I0212 20:47:12.454960 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-lib-modules\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.455274 kubelet[1937]: I0212 20:47:12.455244 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-run\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.455497 kubelet[1937]: I0212 20:47:12.455472 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-bpf-maps\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.455803 kubelet[1937]: I0212 20:47:12.455693 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c860114b-c7ba-474e-9cda-721b1119a31a-hubble-tls\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.456090 kubelet[1937]: I0212 20:47:12.456060 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-config-path\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.456318 kubelet[1937]: I0212 20:47:12.456293 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-etc-cni-netd\") pod \"c860114b-c7ba-474e-9cda-721b1119a31a\" (UID: \"c860114b-c7ba-474e-9cda-721b1119a31a\") "
Feb 12 20:47:12.456582 kubelet[1937]: I0212 20:47:12.456505 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:47:12.456582 kubelet[1937]: I0212 20:47:12.454825 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c860114b-c7ba-474e-9cda-721b1119a31a-kube-api-access-gztf7" (OuterVolumeSpecName: "kube-api-access-gztf7") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "kube-api-access-gztf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:47:12.456582 kubelet[1937]: I0212 20:47:12.456547 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:47:12.457439 kubelet[1937]: W0212 20:47:12.457373 1937 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c860114b-c7ba-474e-9cda-721b1119a31a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 20:47:12.459613 kubelet[1937]: I0212 20:47:12.459549 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:47:12.459613 kubelet[1937]: I0212 20:47:12.459585 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:12.460363 kubelet[1937]: I0212 20:47:12.460306 1937 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a43228a8-2bb8-42cd-b7d0-fb83db9c1926-cilium-config-path\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460363 kubelet[1937]: I0212 20:47:12.460336 1937 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cni-path\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460363 kubelet[1937]: I0212 20:47:12.460353 1937 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c860114b-c7ba-474e-9cda-721b1119a31a-clustermesh-secrets\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460363 kubelet[1937]: I0212 20:47:12.460372 1937 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-22gxm\" (UniqueName: \"kubernetes.io/projected/a43228a8-2bb8-42cd-b7d0-fb83db9c1926-kube-api-access-22gxm\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460940 kubelet[1937]: I0212 20:47:12.460387 1937 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-cgroup\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460940 kubelet[1937]: I0212 20:47:12.460401 1937 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-xtables-lock\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460940 kubelet[1937]: I0212 20:47:12.460417 1937 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-gztf7\" 
(UniqueName: \"kubernetes.io/projected/c860114b-c7ba-474e-9cda-721b1119a31a-kube-api-access-gztf7\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460940 kubelet[1937]: I0212 20:47:12.460431 1937 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-host-proc-sys-kernel\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460940 kubelet[1937]: I0212 20:47:12.460443 1937 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-lib-modules\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460940 kubelet[1937]: I0212 20:47:12.460456 1937 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-hostproc\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.460940 kubelet[1937]: I0212 20:47:12.460468 1937 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-run\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.461425 kubelet[1937]: I0212 20:47:12.460481 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:12.461425 kubelet[1937]: I0212 20:47:12.460575 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c860114b-c7ba-474e-9cda-721b1119a31a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:47:12.468119 kubelet[1937]: I0212 20:47:12.468067 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c860114b-c7ba-474e-9cda-721b1119a31a" (UID: "c860114b-c7ba-474e-9cda-721b1119a31a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:47:12.561550 kubelet[1937]: I0212 20:47:12.561415 1937 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-host-proc-sys-net\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.561550 kubelet[1937]: I0212 20:47:12.561539 1937 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-bpf-maps\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.562028 kubelet[1937]: I0212 20:47:12.561588 1937 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c860114b-c7ba-474e-9cda-721b1119a31a-hubble-tls\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.562028 kubelet[1937]: I0212 20:47:12.561622 1937 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c860114b-c7ba-474e-9cda-721b1119a31a-cilium-config-path\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.562028 kubelet[1937]: I0212 20:47:12.561749 1937 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c860114b-c7ba-474e-9cda-721b1119a31a-etc-cni-netd\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:12.935845 kubelet[1937]: I0212 20:47:12.935790 1937 scope.go:115] "RemoveContainer" containerID="ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900" Feb 12 20:47:12.942472 env[1059]: time="2024-02-12T20:47:12.942117780Z" level=info msg="RemoveContainer for \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\"" Feb 12 20:47:12.951307 env[1059]: time="2024-02-12T20:47:12.951223426Z" level=info msg="RemoveContainer for \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\" returns successfully" Feb 12 20:47:12.954442 kubelet[1937]: I0212 20:47:12.954407 1937 scope.go:115] "RemoveContainer" containerID="21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c" Feb 12 20:47:12.965923 systemd[1]: Removed slice kubepods-burstable-podc860114b_c7ba_474e_9cda_721b1119a31a.slice. Feb 12 20:47:12.966184 systemd[1]: kubepods-burstable-podc860114b_c7ba_474e_9cda_721b1119a31a.slice: Consumed 9.889s CPU time. Feb 12 20:47:12.974897 env[1059]: time="2024-02-12T20:47:12.974379924Z" level=info msg="RemoveContainer for \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\"" Feb 12 20:47:12.977365 systemd[1]: Removed slice kubepods-besteffort-poda43228a8_2bb8_42cd_b7d0_fb83db9c1926.slice. 
Feb 12 20:47:12.981916 env[1059]: time="2024-02-12T20:47:12.981854251Z" level=info msg="RemoveContainer for \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\" returns successfully" Feb 12 20:47:12.982662 kubelet[1937]: I0212 20:47:12.982625 1937 scope.go:115] "RemoveContainer" containerID="e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72" Feb 12 20:47:12.985445 env[1059]: time="2024-02-12T20:47:12.985320985Z" level=info msg="RemoveContainer for \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\"" Feb 12 20:47:12.990976 env[1059]: time="2024-02-12T20:47:12.990907427Z" level=info msg="RemoveContainer for \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\" returns successfully" Feb 12 20:47:12.991563 kubelet[1937]: I0212 20:47:12.991530 1937 scope.go:115] "RemoveContainer" containerID="7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95" Feb 12 20:47:12.994574 env[1059]: time="2024-02-12T20:47:12.994508811Z" level=info msg="RemoveContainer for \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\"" Feb 12 20:47:13.005040 env[1059]: time="2024-02-12T20:47:13.004967335Z" level=info msg="RemoveContainer for \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\" returns successfully" Feb 12 20:47:13.005771 kubelet[1937]: I0212 20:47:13.005673 1937 scope.go:115] "RemoveContainer" containerID="367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6" Feb 12 20:47:13.008996 env[1059]: time="2024-02-12T20:47:13.008940582Z" level=info msg="RemoveContainer for \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\"" Feb 12 20:47:13.022105 env[1059]: time="2024-02-12T20:47:13.022006001Z" level=info msg="RemoveContainer for \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\" returns successfully" Feb 12 20:47:13.022777 kubelet[1937]: I0212 20:47:13.022644 1937 scope.go:115] "RemoveContainer" 
containerID="ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900" Feb 12 20:47:13.023423 env[1059]: time="2024-02-12T20:47:13.023331747Z" level=error msg="ContainerStatus for \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\": not found" Feb 12 20:47:13.025418 kubelet[1937]: E0212 20:47:13.025384 1937 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\": not found" containerID="ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900" Feb 12 20:47:13.027799 kubelet[1937]: I0212 20:47:13.027767 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900} err="failed to get container status \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccec3b97876cb81837a3bb5504c888427f3b74f4cabfefe3e6b2e2732c210900\": not found" Feb 12 20:47:13.027799 kubelet[1937]: I0212 20:47:13.027802 1937 scope.go:115] "RemoveContainer" containerID="21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c" Feb 12 20:47:13.028118 env[1059]: time="2024-02-12T20:47:13.028061765Z" level=error msg="ContainerStatus for \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\": not found" Feb 12 20:47:13.028304 kubelet[1937]: E0212 20:47:13.028291 1937 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\": not found" containerID="21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c" Feb 12 20:47:13.028412 kubelet[1937]: I0212 20:47:13.028400 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c} err="failed to get container status \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\": rpc error: code = NotFound desc = an error occurred when try to find container \"21739bd6e6c1db76cfe52f50cc299dcc92f20dfeaaf4a45c2f784d3420f6715c\": not found" Feb 12 20:47:13.028487 kubelet[1937]: I0212 20:47:13.028477 1937 scope.go:115] "RemoveContainer" containerID="e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72" Feb 12 20:47:13.028770 env[1059]: time="2024-02-12T20:47:13.028688559Z" level=error msg="ContainerStatus for \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\": not found" Feb 12 20:47:13.028968 kubelet[1937]: E0212 20:47:13.028932 1937 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\": not found" containerID="e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72" Feb 12 20:47:13.029061 kubelet[1937]: I0212 20:47:13.029050 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72} err="failed to get container status \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"e725e77504fc525fc9087aa39c1bf0a20a8020c60075b8693489d2f64c859a72\": not found" Feb 12 20:47:13.029129 kubelet[1937]: I0212 20:47:13.029119 1937 scope.go:115] "RemoveContainer" containerID="7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95" Feb 12 20:47:13.029353 env[1059]: time="2024-02-12T20:47:13.029313018Z" level=error msg="ContainerStatus for \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\": not found" Feb 12 20:47:13.029564 kubelet[1937]: E0212 20:47:13.029533 1937 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\": not found" containerID="7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95" Feb 12 20:47:13.029915 kubelet[1937]: I0212 20:47:13.029827 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95} err="failed to get container status \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b423eafecc9d6d5bec5550e018bc7bae3f0065818fb323bf85389e42f707b95\": not found" Feb 12 20:47:13.029975 kubelet[1937]: I0212 20:47:13.029945 1937 scope.go:115] "RemoveContainer" containerID="367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6" Feb 12 20:47:13.030328 env[1059]: time="2024-02-12T20:47:13.030287478Z" level=error msg="ContainerStatus for \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\": not found" Feb 12 20:47:13.030546 kubelet[1937]: E0212 20:47:13.030531 1937 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\": not found" containerID="367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6" Feb 12 20:47:13.030674 kubelet[1937]: I0212 20:47:13.030662 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6} err="failed to get container status \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"367abace071e3897ea19a4a1f02042e622c59d893220b710252c0e2625dd35c6\": not found" Feb 12 20:47:13.030778 kubelet[1937]: I0212 20:47:13.030767 1937 scope.go:115] "RemoveContainer" containerID="f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e" Feb 12 20:47:13.031802 env[1059]: time="2024-02-12T20:47:13.031776185Z" level=info msg="RemoveContainer for \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\"" Feb 12 20:47:13.035636 env[1059]: time="2024-02-12T20:47:13.035605917Z" level=info msg="RemoveContainer for \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\" returns successfully" Feb 12 20:47:13.036134 kubelet[1937]: I0212 20:47:13.036119 1937 scope.go:115] "RemoveContainer" containerID="f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e" Feb 12 20:47:13.036442 env[1059]: time="2024-02-12T20:47:13.036389591Z" level=error msg="ContainerStatus for \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\": not found" Feb 12 20:47:13.038232 kubelet[1937]: E0212 20:47:13.038212 1937 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\": not found" containerID="f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e" Feb 12 20:47:13.039602 kubelet[1937]: I0212 20:47:13.039584 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e} err="failed to get container status \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f686c46dc483344f10810bac5aa2c5fec1c37c33ce7c8a25881133f26e3cae9e\": not found" Feb 12 20:47:13.120212 systemd[1]: var-lib-kubelet-pods-a43228a8\x2d2bb8\x2d42cd\x2db7d0\x2dfb83db9c1926-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d22gxm.mount: Deactivated successfully. Feb 12 20:47:13.120691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda-rootfs.mount: Deactivated successfully. Feb 12 20:47:13.121047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda-shm.mount: Deactivated successfully. Feb 12 20:47:13.121222 systemd[1]: var-lib-kubelet-pods-c860114b\x2dc7ba\x2d474e\x2d9cda\x2d721b1119a31a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgztf7.mount: Deactivated successfully. Feb 12 20:47:13.121411 systemd[1]: var-lib-kubelet-pods-c860114b\x2dc7ba\x2d474e\x2d9cda\x2d721b1119a31a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 12 20:47:13.121566 systemd[1]: var-lib-kubelet-pods-c860114b\x2dc7ba\x2d474e\x2d9cda\x2d721b1119a31a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:47:14.223067 sshd[3618]: pam_unix(sshd:session): session closed for user core Feb 12 20:47:14.229637 systemd[1]: Started sshd@21-172.24.4.230:22-172.24.4.1:59652.service. Feb 12 20:47:14.231252 systemd[1]: sshd@20-172.24.4.230:22-172.24.4.1:59644.service: Deactivated successfully. Feb 12 20:47:14.236921 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 20:47:14.237296 systemd[1]: session-21.scope: Consumed 1.275s CPU time. Feb 12 20:47:14.239940 systemd-logind[1050]: Session 21 logged out. Waiting for processes to exit. Feb 12 20:47:14.244445 systemd-logind[1050]: Removed session 21. Feb 12 20:47:14.341608 kubelet[1937]: I0212 20:47:14.341562 1937 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a43228a8-2bb8-42cd-b7d0-fb83db9c1926 path="/var/lib/kubelet/pods/a43228a8-2bb8-42cd-b7d0-fb83db9c1926/volumes" Feb 12 20:47:14.343568 kubelet[1937]: I0212 20:47:14.343535 1937 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c860114b-c7ba-474e-9cda-721b1119a31a path="/var/lib/kubelet/pods/c860114b-c7ba-474e-9cda-721b1119a31a/volumes" Feb 12 20:47:14.371038 kubelet[1937]: E0212 20:47:14.370961 1937 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:47:15.748305 sshd[3776]: Accepted publickey for core from 172.24.4.1 port 59652 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:47:15.751288 sshd[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:47:15.763847 systemd-logind[1050]: New session 22 of user core. Feb 12 20:47:15.763979 systemd[1]: Started session-22.scope. 
Feb 12 20:47:17.345944 kubelet[1937]: I0212 20:47:17.345482 1937 setters.go:548] "Node became not ready" node="ci-3510-3-2-8-90b6ad721e.novalocal" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:47:17.345409984 +0000 UTC m=+163.330073428 LastTransitionTime:2024-02-12 20:47:17.345409984 +0000 UTC m=+163.330073428 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 20:47:17.500316 kubelet[1937]: I0212 20:47:17.500267 1937 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:47:17.501691 kubelet[1937]: E0212 20:47:17.501670 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c860114b-c7ba-474e-9cda-721b1119a31a" containerName="clean-cilium-state" Feb 12 20:47:17.501827 kubelet[1937]: E0212 20:47:17.501815 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c860114b-c7ba-474e-9cda-721b1119a31a" containerName="cilium-agent" Feb 12 20:47:17.501905 kubelet[1937]: E0212 20:47:17.501895 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c860114b-c7ba-474e-9cda-721b1119a31a" containerName="mount-bpf-fs" Feb 12 20:47:17.501977 kubelet[1937]: E0212 20:47:17.501966 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c860114b-c7ba-474e-9cda-721b1119a31a" containerName="mount-cgroup" Feb 12 20:47:17.502044 kubelet[1937]: E0212 20:47:17.502035 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c860114b-c7ba-474e-9cda-721b1119a31a" containerName="apply-sysctl-overwrites" Feb 12 20:47:17.502110 kubelet[1937]: E0212 20:47:17.502100 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a43228a8-2bb8-42cd-b7d0-fb83db9c1926" containerName="cilium-operator" Feb 12 20:47:17.502289 kubelet[1937]: I0212 20:47:17.502276 1937 memory_manager.go:346] "RemoveStaleState removing state" 
podUID="c860114b-c7ba-474e-9cda-721b1119a31a" containerName="cilium-agent" Feb 12 20:47:17.502358 kubelet[1937]: I0212 20:47:17.502348 1937 memory_manager.go:346] "RemoveStaleState removing state" podUID="a43228a8-2bb8-42cd-b7d0-fb83db9c1926" containerName="cilium-operator" Feb 12 20:47:17.512664 systemd[1]: Created slice kubepods-burstable-podfdfdca94_a311_49d8_90f9_e90b5e4b82ee.slice. Feb 12 20:47:17.604931 kubelet[1937]: I0212 20:47:17.604824 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-cgroup\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.606448 kubelet[1937]: I0212 20:47:17.606410 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-lib-modules\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.606531 kubelet[1937]: I0212 20:47:17.606470 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-config-path\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607462 kubelet[1937]: I0212 20:47:17.607428 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7p5\" (UniqueName: \"kubernetes.io/projected/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-kube-api-access-hr7p5\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607545 kubelet[1937]: I0212 20:47:17.607482 1937 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-run\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607545 kubelet[1937]: I0212 20:47:17.607512 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-etc-cni-netd\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607620 kubelet[1937]: I0212 20:47:17.607550 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-xtables-lock\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607620 kubelet[1937]: I0212 20:47:17.607578 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-clustermesh-secrets\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607620 kubelet[1937]: I0212 20:47:17.607603 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-hostproc\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607769 kubelet[1937]: I0212 20:47:17.607629 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cni-path\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607769 kubelet[1937]: I0212 20:47:17.607654 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-ipsec-secrets\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607769 kubelet[1937]: I0212 20:47:17.607685 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-host-proc-sys-net\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607769 kubelet[1937]: I0212 20:47:17.607733 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-bpf-maps\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607769 kubelet[1937]: I0212 20:47:17.607768 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-hubble-tls\") pod \"cilium-cdw6w\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.607945 kubelet[1937]: I0212 20:47:17.607803 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-host-proc-sys-kernel\") pod \"cilium-cdw6w\" (UID: 
\"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " pod="kube-system/cilium-cdw6w" Feb 12 20:47:17.643003 sshd[3776]: pam_unix(sshd:session): session closed for user core Feb 12 20:47:17.654247 systemd[1]: Started sshd@22-172.24.4.230:22-172.24.4.1:34598.service. Feb 12 20:47:17.656793 systemd[1]: sshd@21-172.24.4.230:22-172.24.4.1:59652.service: Deactivated successfully. Feb 12 20:47:17.659446 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 20:47:17.662035 systemd[1]: session-22.scope: Consumed 1.195s CPU time. Feb 12 20:47:17.668333 systemd-logind[1050]: Session 22 logged out. Waiting for processes to exit. Feb 12 20:47:17.671091 systemd-logind[1050]: Removed session 22. Feb 12 20:47:17.821419 env[1059]: time="2024-02-12T20:47:17.821323895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdw6w,Uid:fdfdca94-a311-49d8-90f9-e90b5e4b82ee,Namespace:kube-system,Attempt:0,}" Feb 12 20:47:17.844594 env[1059]: time="2024-02-12T20:47:17.844464283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:47:17.844963 env[1059]: time="2024-02-12T20:47:17.844596536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:47:17.844963 env[1059]: time="2024-02-12T20:47:17.844631614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:47:17.845341 env[1059]: time="2024-02-12T20:47:17.844922652Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167 pid=3800 runtime=io.containerd.runc.v2 Feb 12 20:47:17.859138 systemd[1]: Started cri-containerd-52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167.scope. 
Feb 12 20:47:17.905333 env[1059]: time="2024-02-12T20:47:17.905285194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdw6w,Uid:fdfdca94-a311-49d8-90f9-e90b5e4b82ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\"" Feb 12 20:47:17.909529 env[1059]: time="2024-02-12T20:47:17.909485659Z" level=info msg="CreateContainer within sandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:47:17.929284 env[1059]: time="2024-02-12T20:47:17.929225638Z" level=info msg="CreateContainer within sandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\"" Feb 12 20:47:17.930385 env[1059]: time="2024-02-12T20:47:17.930355918Z" level=info msg="StartContainer for \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\"" Feb 12 20:47:17.949630 systemd[1]: Started cri-containerd-666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40.scope. Feb 12 20:47:17.962506 systemd[1]: cri-containerd-666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40.scope: Deactivated successfully. 
Feb 12 20:47:17.993046 env[1059]: time="2024-02-12T20:47:17.992969320Z" level=info msg="shim disconnected" id=666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40 Feb 12 20:47:17.993278 env[1059]: time="2024-02-12T20:47:17.993256933Z" level=warning msg="cleaning up after shim disconnected" id=666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40 namespace=k8s.io Feb 12 20:47:17.993444 env[1059]: time="2024-02-12T20:47:17.993409626Z" level=info msg="cleaning up dead shim" Feb 12 20:47:18.002846 env[1059]: time="2024-02-12T20:47:18.002761080Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3860 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:47:18Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:47:18.003303 env[1059]: time="2024-02-12T20:47:18.003176107Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Feb 12 20:47:18.003604 env[1059]: time="2024-02-12T20:47:18.003554725Z" level=error msg="Failed to pipe stderr of container \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\"" error="reading from a closed fifo" Feb 12 20:47:18.003765 env[1059]: time="2024-02-12T20:47:18.003732867Z" level=error msg="Failed to pipe stdout of container \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\"" error="reading from a closed fifo" Feb 12 20:47:18.006953 env[1059]: time="2024-02-12T20:47:18.006899096Z" level=error msg="StartContainer for \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:47:18.007329 kubelet[1937]: E0212 20:47:18.007220 1937 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40" Feb 12 20:47:18.010651 kubelet[1937]: E0212 20:47:18.010523 1937 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:47:18.010651 kubelet[1937]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:47:18.010651 kubelet[1937]: rm /hostbin/cilium-mount Feb 12 20:47:18.010651 kubelet[1937]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hr7p5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cdw6w_kube-system(fdfdca94-a311-49d8-90f9-e90b5e4b82ee): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:47:18.011571 kubelet[1937]: E0212 20:47:18.011542 1937 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cdw6w" podUID=fdfdca94-a311-49d8-90f9-e90b5e4b82ee Feb 12 20:47:18.989371 env[1059]: time="2024-02-12T20:47:18.989048118Z" level=info msg="CreateContainer within sandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 12 20:47:19.017692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520242997.mount: Deactivated successfully. Feb 12 20:47:19.037033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168370019.mount: Deactivated successfully. 
Feb 12 20:47:19.050382 env[1059]: time="2024-02-12T20:47:19.050298413Z" level=info msg="CreateContainer within sandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85\"" Feb 12 20:47:19.052657 env[1059]: time="2024-02-12T20:47:19.052603990Z" level=info msg="StartContainer for \"82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85\"" Feb 12 20:47:19.076991 systemd[1]: Started cri-containerd-82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85.scope. Feb 12 20:47:19.086573 sshd[3786]: Accepted publickey for core from 172.24.4.1 port 34598 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:47:19.088269 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:47:19.096905 systemd-logind[1050]: New session 23 of user core. Feb 12 20:47:19.097557 systemd[1]: Started session-23.scope. Feb 12 20:47:19.099273 systemd[1]: cri-containerd-82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85.scope: Deactivated successfully. 
Feb 12 20:47:19.119314 env[1059]: time="2024-02-12T20:47:19.119224689Z" level=info msg="shim disconnected" id=82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85 Feb 12 20:47:19.119804 env[1059]: time="2024-02-12T20:47:19.119761099Z" level=warning msg="cleaning up after shim disconnected" id=82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85 namespace=k8s.io Feb 12 20:47:19.119993 env[1059]: time="2024-02-12T20:47:19.119957827Z" level=info msg="cleaning up dead shim" Feb 12 20:47:19.129931 env[1059]: time="2024-02-12T20:47:19.129894586Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3899 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:47:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:47:19.130318 env[1059]: time="2024-02-12T20:47:19.130254097Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Feb 12 20:47:19.132438 env[1059]: time="2024-02-12T20:47:19.132367063Z" level=error msg="Failed to pipe stdout of container \"82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85\"" error="reading from a closed fifo" Feb 12 20:47:19.132523 env[1059]: time="2024-02-12T20:47:19.132482646Z" level=error msg="Failed to pipe stderr of container \"82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85\"" error="reading from a closed fifo" Feb 12 20:47:19.134417 env[1059]: time="2024-02-12T20:47:19.134382734Z" level=error msg="StartContainer for \"82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:47:19.135528 kubelet[1937]: E0212 20:47:19.134766 1937 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85" Feb 12 20:47:19.135528 kubelet[1937]: E0212 20:47:19.134979 1937 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:47:19.135528 kubelet[1937]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:47:19.135528 kubelet[1937]: rm /hostbin/cilium-mount Feb 12 20:47:19.135928 kubelet[1937]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hr7p5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cdw6w_kube-system(fdfdca94-a311-49d8-90f9-e90b5e4b82ee): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:47:19.136036 kubelet[1937]: E0212 20:47:19.135027 1937 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cdw6w" podUID=fdfdca94-a311-49d8-90f9-e90b5e4b82ee Feb 12 20:47:19.372798 kubelet[1937]: E0212 20:47:19.372704 1937 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:47:19.987316 kubelet[1937]: I0212 20:47:19.987223 1937 scope.go:115] "RemoveContainer" containerID="666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40" Feb 12 20:47:19.988180 kubelet[1937]: I0212 20:47:19.988141 1937 scope.go:115] "RemoveContainer" containerID="666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40" Feb 12 20:47:19.995513 env[1059]: time="2024-02-12T20:47:19.995409734Z" level=info msg="RemoveContainer for 
\"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\"" Feb 12 20:47:19.996275 env[1059]: time="2024-02-12T20:47:19.996204770Z" level=info msg="RemoveContainer for \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\"" Feb 12 20:47:19.996420 env[1059]: time="2024-02-12T20:47:19.996345510Z" level=error msg="RemoveContainer for \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\" failed" error="failed to set removing state for container \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\": container is already in removing state" Feb 12 20:47:19.996857 kubelet[1937]: E0212 20:47:19.996748 1937 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\": container is already in removing state" containerID="666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40" Feb 12 20:47:19.997105 kubelet[1937]: E0212 20:47:19.996932 1937 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40": container is already in removing state; Skipping pod "cilium-cdw6w_kube-system(fdfdca94-a311-49d8-90f9-e90b5e4b82ee)" Feb 12 20:47:19.997827 kubelet[1937]: E0212 20:47:19.997783 1937 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-cdw6w_kube-system(fdfdca94-a311-49d8-90f9-e90b5e4b82ee)\"" pod="kube-system/cilium-cdw6w" podUID=fdfdca94-a311-49d8-90f9-e90b5e4b82ee Feb 12 20:47:20.020025 env[1059]: time="2024-02-12T20:47:20.019031712Z" level=info msg="RemoveContainer for \"666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40\" returns successfully" 
Feb 12 20:47:20.104706 sshd[3786]: pam_unix(sshd:session): session closed for user core Feb 12 20:47:20.114696 systemd[1]: sshd@22-172.24.4.230:22-172.24.4.1:34598.service: Deactivated successfully. Feb 12 20:47:20.116858 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 20:47:20.119012 systemd-logind[1050]: Session 23 logged out. Waiting for processes to exit. Feb 12 20:47:20.123481 systemd[1]: Started sshd@23-172.24.4.230:22-172.24.4.1:34600.service. Feb 12 20:47:20.128087 systemd-logind[1050]: Removed session 23. Feb 12 20:47:20.992137 env[1059]: time="2024-02-12T20:47:20.992061160Z" level=info msg="StopPodSandbox for \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\"" Feb 12 20:47:20.992771 env[1059]: time="2024-02-12T20:47:20.992673597Z" level=info msg="Container to stop \"82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:47:20.995641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167-shm.mount: Deactivated successfully. Feb 12 20:47:21.012529 systemd[1]: cri-containerd-52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167.scope: Deactivated successfully. Feb 12 20:47:21.055342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167-rootfs.mount: Deactivated successfully. 
Feb 12 20:47:21.065570 env[1059]: time="2024-02-12T20:47:21.065506935Z" level=info msg="shim disconnected" id=52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167 Feb 12 20:47:21.066328 env[1059]: time="2024-02-12T20:47:21.066306792Z" level=warning msg="cleaning up after shim disconnected" id=52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167 namespace=k8s.io Feb 12 20:47:21.066418 env[1059]: time="2024-02-12T20:47:21.066401894Z" level=info msg="cleaning up dead shim" Feb 12 20:47:21.075220 env[1059]: time="2024-02-12T20:47:21.075163619Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3940 runtime=io.containerd.runc.v2\n" Feb 12 20:47:21.075780 env[1059]: time="2024-02-12T20:47:21.075753382Z" level=info msg="TearDown network for sandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" successfully" Feb 12 20:47:21.075871 env[1059]: time="2024-02-12T20:47:21.075852351Z" level=info msg="StopPodSandbox for \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" returns successfully" Feb 12 20:47:21.109904 kubelet[1937]: W0212 20:47:21.109828 1937 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdfdca94_a311_49d8_90f9_e90b5e4b82ee.slice/cri-containerd-666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40.scope WatchSource:0}: container "666f4e2b6f996fcb171b4d813ee861c1efb42eaa340801d16e022c0a50abde40" in namespace "k8s.io": not found Feb 12 20:47:21.147036 kubelet[1937]: I0212 20:47:21.146607 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-lib-modules\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.147036 kubelet[1937]: I0212 20:47:21.146707 1937 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr7p5\" (UniqueName: \"kubernetes.io/projected/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-kube-api-access-hr7p5\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.147036 kubelet[1937]: I0212 20:47:21.146840 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-config-path\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.147036 kubelet[1937]: I0212 20:47:21.146911 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-host-proc-sys-kernel\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.147764 kubelet[1937]: I0212 20:47:21.147470 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-clustermesh-secrets\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.147764 kubelet[1937]: I0212 20:47:21.147555 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-hostproc\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.147764 kubelet[1937]: I0212 20:47:21.147605 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cni-path\") pod 
\"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.147764 kubelet[1937]: I0212 20:47:21.147667 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-ipsec-secrets\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.149781 kubelet[1937]: I0212 20:47:21.148209 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-cgroup\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.149781 kubelet[1937]: I0212 20:47:21.148286 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-run\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.149781 kubelet[1937]: I0212 20:47:21.148336 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-xtables-lock\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.149781 kubelet[1937]: I0212 20:47:21.148398 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-hubble-tls\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.149781 kubelet[1937]: I0212 20:47:21.148454 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-host-proc-sys-net\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.149781 kubelet[1937]: I0212 20:47:21.148508 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-bpf-maps\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.150480 kubelet[1937]: I0212 20:47:21.148561 1937 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-etc-cni-netd\") pod \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\" (UID: \"fdfdca94-a311-49d8-90f9-e90b5e4b82ee\") " Feb 12 20:47:21.150480 kubelet[1937]: I0212 20:47:21.148676 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.150480 kubelet[1937]: I0212 20:47:21.148780 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.153183 kubelet[1937]: W0212 20:47:21.153088 1937 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/fdfdca94-a311-49d8-90f9-e90b5e4b82ee/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:47:21.157116 kubelet[1937]: I0212 20:47:21.155148 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.157116 kubelet[1937]: I0212 20:47:21.155860 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-hostproc" (OuterVolumeSpecName: "hostproc") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.157116 kubelet[1937]: I0212 20:47:21.156017 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cni-path" (OuterVolumeSpecName: "cni-path") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.157116 kubelet[1937]: I0212 20:47:21.156071 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.157116 kubelet[1937]: I0212 20:47:21.156113 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.159538 kubelet[1937]: I0212 20:47:21.156153 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.159538 kubelet[1937]: I0212 20:47:21.156193 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.159538 kubelet[1937]: I0212 20:47:21.156576 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:47:21.179168 kubelet[1937]: I0212 20:47:21.173394 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:47:21.177907 systemd[1]: var-lib-kubelet-pods-fdfdca94\x2da311\x2d49d8\x2d90f9\x2de90b5e4b82ee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhr7p5.mount: Deactivated successfully. Feb 12 20:47:21.184298 kubelet[1937]: I0212 20:47:21.184236 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-kube-api-access-hr7p5" (OuterVolumeSpecName: "kube-api-access-hr7p5") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "kube-api-access-hr7p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:47:21.188448 systemd[1]: var-lib-kubelet-pods-fdfdca94\x2da311\x2d49d8\x2d90f9\x2de90b5e4b82ee-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 20:47:21.193282 kubelet[1937]: I0212 20:47:21.193184 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:47:21.205804 systemd[1]: var-lib-kubelet-pods-fdfdca94\x2da311\x2d49d8\x2d90f9\x2de90b5e4b82ee-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:47:21.210040 kubelet[1937]: I0212 20:47:21.209913 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:47:21.210503 kubelet[1937]: I0212 20:47:21.210447 1937 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fdfdca94-a311-49d8-90f9-e90b5e4b82ee" (UID: "fdfdca94-a311-49d8-90f9-e90b5e4b82ee"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:47:21.249177 kubelet[1937]: I0212 20:47:21.249090 1937 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-hr7p5\" (UniqueName: \"kubernetes.io/projected/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-kube-api-access-hr7p5\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.249346 kubelet[1937]: I0212 20:47:21.249334 1937 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-lib-modules\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.249447 kubelet[1937]: I0212 20:47:21.249436 1937 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-config-path\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.249545 kubelet[1937]: I0212 20:47:21.249535 1937 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-host-proc-sys-kernel\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.249641 kubelet[1937]: I0212 20:47:21.249631 1937 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-ipsec-secrets\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.249771 kubelet[1937]: I0212 20:47:21.249760 1937 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-clustermesh-secrets\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.249871 kubelet[1937]: I0212 20:47:21.249861 1937 reconciler_common.go:295] "Volume detached for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-hostproc\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.249966 kubelet[1937]: I0212 20:47:21.249956 1937 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cni-path\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.250060 kubelet[1937]: I0212 20:47:21.250051 1937 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-hubble-tls\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.250165 kubelet[1937]: I0212 20:47:21.250155 1937 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-cgroup\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.250261 kubelet[1937]: I0212 20:47:21.250251 1937 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-cilium-run\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.250360 kubelet[1937]: I0212 20:47:21.250350 1937 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-xtables-lock\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.250471 kubelet[1937]: I0212 20:47:21.250461 1937 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-host-proc-sys-net\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.250565 kubelet[1937]: I0212 20:47:21.250555 1937 
reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-bpf-maps\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.250661 kubelet[1937]: I0212 20:47:21.250649 1937 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdfdca94-a311-49d8-90f9-e90b5e4b82ee-etc-cni-netd\") on node \"ci-3510-3-2-8-90b6ad721e.novalocal\" DevicePath \"\"" Feb 12 20:47:21.556793 sshd[3920]: Accepted publickey for core from 172.24.4.1 port 34600 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:47:21.561241 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:47:21.573866 systemd-logind[1050]: New session 24 of user core. Feb 12 20:47:21.577594 systemd[1]: Started session-24.scope. Feb 12 20:47:21.997601 systemd[1]: var-lib-kubelet-pods-fdfdca94\x2da311\x2d49d8\x2d90f9\x2de90b5e4b82ee-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:47:22.003118 kubelet[1937]: I0212 20:47:22.003062 1937 scope.go:115] "RemoveContainer" containerID="82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85" Feb 12 20:47:22.016353 systemd[1]: Removed slice kubepods-burstable-podfdfdca94_a311_49d8_90f9_e90b5e4b82ee.slice. 
Feb 12 20:47:22.026672 env[1059]: time="2024-02-12T20:47:22.025932492Z" level=info msg="RemoveContainer for \"82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85\"" Feb 12 20:47:22.036245 env[1059]: time="2024-02-12T20:47:22.036157236Z" level=info msg="RemoveContainer for \"82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85\" returns successfully" Feb 12 20:47:22.095384 kubelet[1937]: I0212 20:47:22.095327 1937 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:47:22.095595 kubelet[1937]: E0212 20:47:22.095429 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdfdca94-a311-49d8-90f9-e90b5e4b82ee" containerName="mount-cgroup" Feb 12 20:47:22.095595 kubelet[1937]: I0212 20:47:22.095471 1937 memory_manager.go:346] "RemoveStaleState removing state" podUID="fdfdca94-a311-49d8-90f9-e90b5e4b82ee" containerName="mount-cgroup" Feb 12 20:47:22.095595 kubelet[1937]: I0212 20:47:22.095502 1937 memory_manager.go:346] "RemoveStaleState removing state" podUID="fdfdca94-a311-49d8-90f9-e90b5e4b82ee" containerName="mount-cgroup" Feb 12 20:47:22.095595 kubelet[1937]: E0212 20:47:22.095531 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fdfdca94-a311-49d8-90f9-e90b5e4b82ee" containerName="mount-cgroup" Feb 12 20:47:22.102204 systemd[1]: Created slice kubepods-burstable-pod129d1002_d0d4_4857_bbc0_27c0b3b91397.slice. 
Feb 12 20:47:22.123902 kubelet[1937]: W0212 20:47:22.123858 1937 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-2-8-90b6ad721e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-8-90b6ad721e.novalocal' and this object Feb 12 20:47:22.126824 kubelet[1937]: E0212 20:47:22.126791 1937 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-2-8-90b6ad721e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-8-90b6ad721e.novalocal' and this object Feb 12 20:47:22.158474 kubelet[1937]: I0212 20:47:22.158394 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-hostproc\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.158752 kubelet[1937]: I0212 20:47:22.158739 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-etc-cni-netd\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.158928 kubelet[1937]: I0212 20:47:22.158903 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/129d1002-d0d4-4857-bbc0-27c0b3b91397-clustermesh-secrets\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 
20:47:22.159028 kubelet[1937]: I0212 20:47:22.159017 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-host-proc-sys-kernel\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159121 kubelet[1937]: I0212 20:47:22.159111 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/129d1002-d0d4-4857-bbc0-27c0b3b91397-cilium-ipsec-secrets\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159211 kubelet[1937]: I0212 20:47:22.159201 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-bpf-maps\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159308 kubelet[1937]: I0212 20:47:22.159297 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2w5\" (UniqueName: \"kubernetes.io/projected/129d1002-d0d4-4857-bbc0-27c0b3b91397-kube-api-access-pt2w5\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159407 kubelet[1937]: I0212 20:47:22.159396 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-xtables-lock\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159518 kubelet[1937]: I0212 20:47:22.159497 1937 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-cilium-run\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159605 kubelet[1937]: I0212 20:47:22.159594 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-cni-path\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159694 kubelet[1937]: I0212 20:47:22.159683 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-cilium-cgroup\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159813 kubelet[1937]: I0212 20:47:22.159801 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-lib-modules\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159909 kubelet[1937]: I0212 20:47:22.159899 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/129d1002-d0d4-4857-bbc0-27c0b3b91397-cilium-config-path\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.159997 kubelet[1937]: I0212 20:47:22.159987 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/129d1002-d0d4-4857-bbc0-27c0b3b91397-host-proc-sys-net\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.160106 kubelet[1937]: I0212 20:47:22.160096 1937 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/129d1002-d0d4-4857-bbc0-27c0b3b91397-hubble-tls\") pod \"cilium-lqwgp\" (UID: \"129d1002-d0d4-4857-bbc0-27c0b3b91397\") " pod="kube-system/cilium-lqwgp" Feb 12 20:47:22.335061 env[1059]: time="2024-02-12T20:47:22.334541747Z" level=info msg="StopPodSandbox for \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\"" Feb 12 20:47:22.335061 env[1059]: time="2024-02-12T20:47:22.334811105Z" level=info msg="TearDown network for sandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" successfully" Feb 12 20:47:22.335061 env[1059]: time="2024-02-12T20:47:22.334899004Z" level=info msg="StopPodSandbox for \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" returns successfully" Feb 12 20:47:22.336086 kubelet[1937]: I0212 20:47:22.335456 1937 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fdfdca94-a311-49d8-90f9-e90b5e4b82ee path="/var/lib/kubelet/pods/fdfdca94-a311-49d8-90f9-e90b5e4b82ee/volumes" Feb 12 20:47:23.009997 env[1059]: time="2024-02-12T20:47:23.009227528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqwgp,Uid:129d1002-d0d4-4857-bbc0-27c0b3b91397,Namespace:kube-system,Attempt:0,}" Feb 12 20:47:23.043617 env[1059]: time="2024-02-12T20:47:23.043390837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:47:23.043617 env[1059]: time="2024-02-12T20:47:23.043537157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:47:23.044294 env[1059]: time="2024-02-12T20:47:23.043571914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:47:23.044837 env[1059]: time="2024-02-12T20:47:23.044680844Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a pid=3975 runtime=io.containerd.runc.v2 Feb 12 20:47:23.086445 systemd[1]: run-containerd-runc-k8s.io-8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a-runc.acTWdA.mount: Deactivated successfully. Feb 12 20:47:23.094200 systemd[1]: Started cri-containerd-8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a.scope. Feb 12 20:47:23.123052 env[1059]: time="2024-02-12T20:47:23.122980212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lqwgp,Uid:129d1002-d0d4-4857-bbc0-27c0b3b91397,Namespace:kube-system,Attempt:0,} returns sandbox id \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\"" Feb 12 20:47:23.129000 env[1059]: time="2024-02-12T20:47:23.128957781Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:47:23.149478 env[1059]: time="2024-02-12T20:47:23.149429094Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e156a79e4360782e222e14f60b7300e986a65f1337b144bf5773ceca36c9d9bb\"" Feb 12 20:47:23.152549 env[1059]: time="2024-02-12T20:47:23.152518167Z" level=info msg="StartContainer for \"e156a79e4360782e222e14f60b7300e986a65f1337b144bf5773ceca36c9d9bb\"" Feb 12 20:47:23.170872 systemd[1]: Started 
cri-containerd-e156a79e4360782e222e14f60b7300e986a65f1337b144bf5773ceca36c9d9bb.scope. Feb 12 20:47:23.209069 env[1059]: time="2024-02-12T20:47:23.209012049Z" level=info msg="StartContainer for \"e156a79e4360782e222e14f60b7300e986a65f1337b144bf5773ceca36c9d9bb\" returns successfully" Feb 12 20:47:23.233947 systemd[1]: cri-containerd-e156a79e4360782e222e14f60b7300e986a65f1337b144bf5773ceca36c9d9bb.scope: Deactivated successfully. Feb 12 20:47:23.265881 env[1059]: time="2024-02-12T20:47:23.265740502Z" level=info msg="shim disconnected" id=e156a79e4360782e222e14f60b7300e986a65f1337b144bf5773ceca36c9d9bb Feb 12 20:47:23.266152 env[1059]: time="2024-02-12T20:47:23.266131904Z" level=warning msg="cleaning up after shim disconnected" id=e156a79e4360782e222e14f60b7300e986a65f1337b144bf5773ceca36c9d9bb namespace=k8s.io Feb 12 20:47:23.266226 env[1059]: time="2024-02-12T20:47:23.266211175Z" level=info msg="cleaning up dead shim" Feb 12 20:47:23.274873 env[1059]: time="2024-02-12T20:47:23.274815480Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4060 runtime=io.containerd.runc.v2\n" Feb 12 20:47:24.029358 env[1059]: time="2024-02-12T20:47:24.022372638Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:47:24.029478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717129781.mount: Deactivated successfully. Feb 12 20:47:24.058293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987670904.mount: Deactivated successfully. 
Feb 12 20:47:24.064113 env[1059]: time="2024-02-12T20:47:24.064010252Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd\"" Feb 12 20:47:24.065780 env[1059]: time="2024-02-12T20:47:24.065669197Z" level=info msg="StartContainer for \"91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd\"" Feb 12 20:47:24.107816 systemd[1]: Started cri-containerd-91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd.scope. Feb 12 20:47:24.146112 env[1059]: time="2024-02-12T20:47:24.146060132Z" level=info msg="StartContainer for \"91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd\" returns successfully" Feb 12 20:47:24.165747 systemd[1]: cri-containerd-91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd.scope: Deactivated successfully. Feb 12 20:47:24.192682 env[1059]: time="2024-02-12T20:47:24.192617840Z" level=info msg="shim disconnected" id=91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd Feb 12 20:47:24.193151 env[1059]: time="2024-02-12T20:47:24.193123880Z" level=warning msg="cleaning up after shim disconnected" id=91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd namespace=k8s.io Feb 12 20:47:24.193242 env[1059]: time="2024-02-12T20:47:24.193226958Z" level=info msg="cleaning up dead shim" Feb 12 20:47:24.215467 env[1059]: time="2024-02-12T20:47:24.215422057Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4122 runtime=io.containerd.runc.v2\n" Feb 12 20:47:24.219561 kubelet[1937]: W0212 20:47:24.218940 1937 manager.go:1174] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdfdca94_a311_49d8_90f9_e90b5e4b82ee.slice/cri-containerd-82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85.scope WatchSource:0}: container "82b5874d3d529ca2c64324e71deb14e5fa8a620902484d9079f351db2d93db85" in namespace "k8s.io": not found Feb 12 20:47:24.374658 kubelet[1937]: E0212 20:47:24.374618 1937 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:47:25.030340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd-rootfs.mount: Deactivated successfully. Feb 12 20:47:25.053737 env[1059]: time="2024-02-12T20:47:25.053572819Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:47:25.089743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513270894.mount: Deactivated successfully. Feb 12 20:47:25.111409 env[1059]: time="2024-02-12T20:47:25.111309434Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629\"" Feb 12 20:47:25.113750 env[1059]: time="2024-02-12T20:47:25.112928720Z" level=info msg="StartContainer for \"1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629\"" Feb 12 20:47:25.139131 systemd[1]: Started cri-containerd-1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629.scope. 
Feb 12 20:47:25.183171 env[1059]: time="2024-02-12T20:47:25.183110197Z" level=info msg="StartContainer for \"1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629\" returns successfully" Feb 12 20:47:25.187359 systemd[1]: cri-containerd-1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629.scope: Deactivated successfully. Feb 12 20:47:25.217475 env[1059]: time="2024-02-12T20:47:25.217390186Z" level=info msg="shim disconnected" id=1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629 Feb 12 20:47:25.217805 env[1059]: time="2024-02-12T20:47:25.217783390Z" level=warning msg="cleaning up after shim disconnected" id=1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629 namespace=k8s.io Feb 12 20:47:25.217904 env[1059]: time="2024-02-12T20:47:25.217885426Z" level=info msg="cleaning up dead shim" Feb 12 20:47:25.225830 env[1059]: time="2024-02-12T20:47:25.225772435Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:47:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n" Feb 12 20:47:26.030325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629-rootfs.mount: Deactivated successfully. Feb 12 20:47:26.047265 env[1059]: time="2024-02-12T20:47:26.047107001Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:47:26.085984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584752277.mount: Deactivated successfully. 
Feb 12 20:47:26.109735 env[1059]: time="2024-02-12T20:47:26.109598766Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17\"" Feb 12 20:47:26.112211 env[1059]: time="2024-02-12T20:47:26.112130245Z" level=info msg="StartContainer for \"556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17\"" Feb 12 20:47:26.145850 systemd[1]: Started cri-containerd-556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17.scope. Feb 12 20:47:26.188334 systemd[1]: cri-containerd-556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17.scope: Deactivated successfully. Feb 12 20:47:26.189855 env[1059]: time="2024-02-12T20:47:26.189786213Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129d1002_d0d4_4857_bbc0_27c0b3b91397.slice/cri-containerd-556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17.scope/memory.events\": no such file or directory" Feb 12 20:47:26.194894 env[1059]: time="2024-02-12T20:47:26.194860813Z" level=info msg="StartContainer for \"556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17\" returns successfully" Feb 12 20:47:26.222058 env[1059]: time="2024-02-12T20:47:26.222013433Z" level=info msg="shim disconnected" id=556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17 Feb 12 20:47:26.222256 env[1059]: time="2024-02-12T20:47:26.222237922Z" level=warning msg="cleaning up after shim disconnected" id=556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17 namespace=k8s.io Feb 12 20:47:26.222320 env[1059]: time="2024-02-12T20:47:26.222306556Z" level=info msg="cleaning up dead shim" Feb 12 20:47:26.244931 env[1059]: time="2024-02-12T20:47:26.244857767Z" level=warning 
msg="cleanup warnings time=\"2024-02-12T20:47:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4240 runtime=io.containerd.runc.v2\n" Feb 12 20:47:27.030294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17-rootfs.mount: Deactivated successfully. Feb 12 20:47:27.049413 env[1059]: time="2024-02-12T20:47:27.049378165Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:47:27.089522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417130654.mount: Deactivated successfully. Feb 12 20:47:27.116615 env[1059]: time="2024-02-12T20:47:27.116523689Z" level=info msg="CreateContainer within sandbox \"8144958fc7100de0cd47382923a3d6d8270b04db73e79caf3ce0c6312d6bab3a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0160e329dbfad094148961d2f0faa713fd5fb9c37d40a13936b10f598b078428\"" Feb 12 20:47:27.122140 env[1059]: time="2024-02-12T20:47:27.122074073Z" level=info msg="StartContainer for \"0160e329dbfad094148961d2f0faa713fd5fb9c37d40a13936b10f598b078428\"" Feb 12 20:47:27.157273 systemd[1]: Started cri-containerd-0160e329dbfad094148961d2f0faa713fd5fb9c37d40a13936b10f598b078428.scope. 
Feb 12 20:47:27.206546 env[1059]: time="2024-02-12T20:47:27.206468560Z" level=info msg="StartContainer for \"0160e329dbfad094148961d2f0faa713fd5fb9c37d40a13936b10f598b078428\" returns successfully" Feb 12 20:47:27.333544 kubelet[1937]: W0212 20:47:27.333488 1937 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129d1002_d0d4_4857_bbc0_27c0b3b91397.slice/cri-containerd-e156a79e4360782e222e14f60b7300e986a65f1337b144bf5773ceca36c9d9bb.scope WatchSource:0}: task e156a79e4360782e222e14f60b7300e986a65f1337b144bf5773ceca36c9d9bb not found: not found Feb 12 20:47:28.081907 kubelet[1937]: I0212 20:47:28.081862 1937 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lqwgp" podStartSLOduration=6.081819318 pod.CreationTimestamp="2024-02-12 20:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:47:28.081328322 +0000 UTC m=+174.065991756" watchObservedRunningTime="2024-02-12 20:47:28.081819318 +0000 UTC m=+174.066482732" Feb 12 20:47:28.083752 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 20:47:28.131767 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Feb 12 20:47:28.431467 systemd[1]: run-containerd-runc-k8s.io-0160e329dbfad094148961d2f0faa713fd5fb9c37d40a13936b10f598b078428-runc.ojvH5c.mount: Deactivated successfully. 
Feb 12 20:47:30.453968 kubelet[1937]: W0212 20:47:30.453859 1937 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129d1002_d0d4_4857_bbc0_27c0b3b91397.slice/cri-containerd-91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd.scope WatchSource:0}: task 91f1cae905181c08b822852c59ace3bfcaaae632405ba04260879f0064e102bd not found: not found Feb 12 20:47:30.720517 systemd[1]: run-containerd-runc-k8s.io-0160e329dbfad094148961d2f0faa713fd5fb9c37d40a13936b10f598b078428-runc.Lc75kk.mount: Deactivated successfully. Feb 12 20:47:31.347265 systemd-networkd[980]: lxc_health: Link UP Feb 12 20:47:31.363750 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:47:31.367901 systemd-networkd[980]: lxc_health: Gained carrier Feb 12 20:47:32.960079 systemd[1]: run-containerd-runc-k8s.io-0160e329dbfad094148961d2f0faa713fd5fb9c37d40a13936b10f598b078428-runc.7Zz4c7.mount: Deactivated successfully. 
Feb 12 20:47:33.278333 systemd-networkd[980]: lxc_health: Gained IPv6LL Feb 12 20:47:33.571049 kubelet[1937]: W0212 20:47:33.570998 1937 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129d1002_d0d4_4857_bbc0_27c0b3b91397.slice/cri-containerd-1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629.scope WatchSource:0}: task 1cd6236879b5828057d467ee728e56368993f8fc077999b174fe27a1e0695629 not found: not found Feb 12 20:47:34.216026 env[1059]: time="2024-02-12T20:47:34.215840875Z" level=info msg="StopPodSandbox for \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\"" Feb 12 20:47:34.216026 env[1059]: time="2024-02-12T20:47:34.215934557Z" level=info msg="TearDown network for sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" successfully" Feb 12 20:47:34.216026 env[1059]: time="2024-02-12T20:47:34.215971876Z" level=info msg="StopPodSandbox for \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" returns successfully" Feb 12 20:47:34.216663 env[1059]: time="2024-02-12T20:47:34.216601358Z" level=info msg="RemovePodSandbox for \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\"" Feb 12 20:47:34.216783 env[1059]: time="2024-02-12T20:47:34.216680433Z" level=info msg="Forcibly stopping sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\"" Feb 12 20:47:34.216922 env[1059]: time="2024-02-12T20:47:34.216880719Z" level=info msg="TearDown network for sandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" successfully" Feb 12 20:47:34.222964 env[1059]: time="2024-02-12T20:47:34.222879246Z" level=info msg="RemovePodSandbox \"1aac7fd12286de6b928faf39b2ca200cc083dfc187cb025fcc42a065813eeeda\" returns successfully" Feb 12 20:47:34.223934 env[1059]: time="2024-02-12T20:47:34.223884407Z" level=info msg="StopPodSandbox for 
\"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\"" Feb 12 20:47:34.224125 env[1059]: time="2024-02-12T20:47:34.224047166Z" level=info msg="TearDown network for sandbox \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\" successfully" Feb 12 20:47:34.224172 env[1059]: time="2024-02-12T20:47:34.224125689Z" level=info msg="StopPodSandbox for \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\" returns successfully" Feb 12 20:47:34.225793 env[1059]: time="2024-02-12T20:47:34.224575643Z" level=info msg="RemovePodSandbox for \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\"" Feb 12 20:47:34.225793 env[1059]: time="2024-02-12T20:47:34.224614995Z" level=info msg="Forcibly stopping sandbox \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\"" Feb 12 20:47:34.225793 env[1059]: time="2024-02-12T20:47:34.224696555Z" level=info msg="TearDown network for sandbox \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\" successfully" Feb 12 20:47:34.228686 env[1059]: time="2024-02-12T20:47:34.228643523Z" level=info msg="RemovePodSandbox \"35b40f630aae82df8454ad46acbb63e6d9b28f4a64e553c799d8886721e22e24\" returns successfully" Feb 12 20:47:34.229464 env[1059]: time="2024-02-12T20:47:34.229419986Z" level=info msg="StopPodSandbox for \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\"" Feb 12 20:47:34.229860 env[1059]: time="2024-02-12T20:47:34.229701831Z" level=info msg="TearDown network for sandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" successfully" Feb 12 20:47:34.230064 env[1059]: time="2024-02-12T20:47:34.230020395Z" level=info msg="StopPodSandbox for \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" returns successfully" Feb 12 20:47:34.230891 env[1059]: time="2024-02-12T20:47:34.230844765Z" level=info msg="RemovePodSandbox for \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\"" Feb 12 
20:47:34.231118 env[1059]: time="2024-02-12T20:47:34.231047386Z" level=info msg="Forcibly stopping sandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\"" Feb 12 20:47:34.231355 env[1059]: time="2024-02-12T20:47:34.231312942Z" level=info msg="TearDown network for sandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" successfully" Feb 12 20:47:34.239071 env[1059]: time="2024-02-12T20:47:34.239009348Z" level=info msg="RemovePodSandbox \"52a80b31cebb820e3b19bcba09e3cfe5c16c76e797e102242ef8365fd4ca5167\" returns successfully" Feb 12 20:47:35.188922 systemd[1]: run-containerd-runc-k8s.io-0160e329dbfad094148961d2f0faa713fd5fb9c37d40a13936b10f598b078428-runc.JauDUR.mount: Deactivated successfully. Feb 12 20:47:36.682078 kubelet[1937]: W0212 20:47:36.681948 1937 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod129d1002_d0d4_4857_bbc0_27c0b3b91397.slice/cri-containerd-556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17.scope WatchSource:0}: task 556908e3cf1db8119617576bce2cbdd0eb86b3140a158862f1c0d9dad6469f17 not found: not found Feb 12 20:47:37.442789 systemd[1]: run-containerd-runc-k8s.io-0160e329dbfad094148961d2f0faa713fd5fb9c37d40a13936b10f598b078428-runc.eoVcsL.mount: Deactivated successfully. Feb 12 20:47:37.802299 sshd[3920]: pam_unix(sshd:session): session closed for user core Feb 12 20:47:37.822676 systemd[1]: sshd@23-172.24.4.230:22-172.24.4.1:34600.service: Deactivated successfully. Feb 12 20:47:37.824339 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 20:47:37.825819 systemd-logind[1050]: Session 24 logged out. Waiting for processes to exit. Feb 12 20:47:37.827797 systemd-logind[1050]: Removed session 24.